Hacker News
Class Hierarchies? Don't Do That (raganwald.com)
333 points by johanbrook on March 30, 2014 | 240 comments

"the real world doesn’t work that way. It really doesn’t work that way. In zoology, for example, we have penguins, birds that swim. And the bat, a mammal that flies. And monotremes like the platypus, an animal that lays eggs but nurses its young with milk."

Former biologist here. Actually...most living things do work that way. A human ISA primate. Genetically. Functionally.

If you merely focus on outward behavior, you get to the same place that early biologists got to with what was called "morphological classification" -- you find the weird examples of convergent evolution (e.g. "OMG egg-laying mammal!"), and you're tempted to throw out the whole classification system, even though it mostly works, and the errors are merely distracting from the inherent truth of the idea (that we're all related by genetic phylogeny; we literally share implementation).

Anyway, programmers, learn from biology: when you see these kinds of errors it probably means that you're classifying things incorrectly, not that you should stop classifying altogether.

What something is can change greatly based on context. This means how we classify things (generalize/group) can also change. An apple may be used to grow a tree, serve as a prop in a game, be used as fertilizer, or be eaten.

That being said.

Classifying things using inheritance leads to a software architecture that is difficult to change. This is especially the case if you follow the principle of not breaking interfaces.

In the real world, people can build out classification systems and easily add in edge cases when we find new ways things can be classified. In software, that could lead to a complete change in the software architecture.

Bravo sir, that was a fantastic and succinct explanation of the difference between inheritance in the real world vs in OOP.

There is a classification system, but it's a neat 'cubby-hole' approach to systems biology. The reality is that not everything has a clean parent-child relationship. For example, humans have the ABO blood group system. If you have A type blood you are not a 'subclass' of a human with no blood type.

All ridiculous discussions of biology aside, a class-based hierarchy is a maintenance issue compared to designing classes through composition. There are two ways to design a 'cheeseburger': a) meat sandwich is a superclass of burger, which is a superclass of cheeseburger; b) a sandwich which contains bun, cheese, patty, lettuce.

B is a more maintainable system and allows for better class flexibility in the design. It avoids complex bugs where 'meat sandwich' changes some internals and breaks something down the hierarchy chain in some subtle way (for example, in releasing access to a critical resource).
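The composition version can be sketched in a few lines of Ruby (the class and ingredient names here are invented for illustration):

```ruby
# Composition: a sandwich HAS ingredients, rather than Cheeseburger
# inheriting from Burger inheriting from MeatSandwich.
class Sandwich
  def initialize(*ingredients)
    @ingredients = ingredients
  end

  def describe
    @ingredients.join(", ")
  end
end

cheeseburger = Sandwich.new("bun", "patty", "cheese", "lettuce")
cheeseburger.describe  # => "bun, patty, cheese, lettuce"
```

Changing what goes into a sandwich now means changing a list of parts, not reshuffling a hierarchy.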

If I understand it correctly, the author is saying "is-a" is a poor, often broken means of building an ecosystem, and that interfaces are more correct: "has-a" is more flexible and useful.

In my limited understanding of biology this works fantastically well. For a platypus, "is-a" is ridiculously broken, and one would not want to try to reproduce the underlying source code. However, for classifying and interacting with the platypus, "has-a" warm blood, "has-a" egg-laying-thing (#), "has-a" receptor for X is all we care about.

I suspect that is-a works fine at the cellular level and tends to break down higher up, and that is pretty much my experience: in software we want to build ecosystems, and forget that an individual human's body is more an ecosystem than a single machine.

interesting food for thought

(#) what I said about limited biology

I think there's some confusion about the analogy being drawn here. Biological classification is, like you said, a question of shared implementations. That's useful and it works especially well in biology because organisms are largely built through specialization; i.e. it really is a tree, where ancestry groups organisms by shared genetic material.

But to begin with, we're not classifying things; we're building them. Software is designed; you don't (or at least shouldn't) construct your programming abstractions by progressively mutating the different parts of your code. Nature had to reimplement flying for the bat, but you don't have to! You get to build a bat by strapping wings onto a rat, and you know how to build wings because you already did it for birds. But the hierarchy system doesn't let you do that; you'll have to do what nature did and completely reimplement flying for the Rodent subclass of Mammal, since Bird is nowhere near it in your hierarchy diagram. That's the essence of the author's point, as I read it.

I suspect Raganwald would probably go a lot further than an OOP+mixin approach (and there is no shortage of good alternatives), but the simplest way around this kind of thing is something like this piece of Ruby:

  class Bat
    include Flying
    include Rodent
    include Echolocation
  end
You get the idea, and it's pretty easy to port to JS. It also lets you build the griffin and the centaur, because the software doesn't have to follow nature's rules about how to derive one thing from another. Constraining your code's structure to evolution's is-a model is a straitjacket.

So that's implementation, but what about interface? We actually do care about outward behavior, and making our interfaces reflect that is important. Morphological classification may not tell you much about how a bat works inside and what other species a bat is evolutionarily close to, but it remains the case that if you want to know what a bat does, flying is a big part of that. What if I want to know whether a given animal can get over my fence? How do I ask if a creature can fly? I don't care at all that bats are closely related to mice because they're acting nothing like mice when zooming over my fence. Do I really add canFly() to some Animal superclass and have flying creatures override it? How many features will I detect that way? But with our mixins, we can just ask if it includes Flying, because the structure of our code captures that. Codifying the structure of your interface into the structure of your classes is part of the point of OOP, but it turns out that interfaces don't fit a simple inheritance mold. So you need more powerful tools to describe that structure.
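To make the "ask if it includes Flying" idea concrete, here's a minimal Ruby sketch (the module names and method bodies are invented for illustration):

```ruby
module Flying
  def fly
    "flap flap"
  end
end

module Echolocation
  def echolocate
    "ping"
  end
end

# The class mixes in capabilities instead of inheriting them from
# some distant branch of a hierarchy.
class Bat
  include Flying
  include Echolocation
end

# We can query capability directly, without caring about ancestry:
Bat.include?(Flying)     # => true
Bat.new.is_a?(Flying)    # => true (modules participate in is_a? checks)
Bat.new.fly              # => "flap flap"
```

The "can it get over my fence?" question becomes a check against the Flying capability, not a walk up the Animal tree.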

This issue comes up enough in gaming that the no-hierarchy-lots-of-mixins approach has a name: the entity-component model. I bring this up not because I think the point applies differently to games but because it's a domain where you might actually build a bat.

Finally, how far would we really be willing to take biology's classification-by-implementation philosophy, even if classifying pieces of code were really our goal? We could end up with superclasses like UsesAStar and HashSet. Not really the point of all of this, right?

Long story short: I agree with the article and think the biology doesn't contradict it, but think we should probably stop using evolution as a conceptual model for how we organize our code.

> Software is designed; you don't (or at least shouldn't) construct your programming abstractions by progressively mutating the different parts of your code.

Is this another ad for big design up front, aka waterfall? It's a nice ideal, but when you get down to it, software really is evolved, since requirements are constantly changing and your understanding keeps improving.

> You get to build a bat by strapping wings onto a rat, and you know how to build wings because you already did it for birds.

Really you don't. The bat flies quite differently from the bird, with substantially different attributes. Strapping bird wings onto a rat often isn't going to work.

> This issue comes up enough in gaming that the no-hierarchy-lots-of-mixins approach has a name: the entity-component model.

This is actually much older than that (Common Lisp Flavors), and languages like Scala include mixin-like traits, not to mention the work of Bracha and Cook. Inheritance is still heavily involved in that, though: it is not "mixins or inheritance" but "inheritance with mixins"; it still very much is OOP, and you are still defining a DAG, if not a tree.

> Finally, how far would we really be willing to take biology's classification-by-implementation philosophy, even if classifying pieces of code were really our goal? We could end up with superclasses like UsesAStar and HashSet. Not really the point of all of this, right?

Why not? Why not classify all of our code at the finest grain we can? Of course, existing languages are poorly suited to that task, but we can always define new languages where this isn't a PITA. Or we could just use machine learning to figure out what the connections are automatically (this will happen eventually, just not for another 10 or 20 years I hope).

> Is this another ad for big design upfront, aka waterfall?

Not at all. You won't even know what the right abstractions are upfront. If you want to express the point here in terms of coding process, it's something like "refactor your code into this". That the design emerges iteratively isn't the same thing as saying there's no design. Contrast to evolution, which is not just iterative but also has no design at all. No one gets to sit around and think, "huh, let's reorganize this". Because if they did, maybe organisms in convergent evolution would share code.

Edit, let me try that point again, because I don't think that was very clear. You may design iteratively, but you're still designing. "Oh, these things have x in common, so let me build an abstraction that captures that commonality and have them use it." But evolution can't do that. The only design tool it has is mutating existing code into additional subclasses. You have more tools than that, so as a metaphor evolution's "designs" can be described by simpler ontologies. So why limit yourself to those ontologies? You can think of evolution as having ridiculous limitations on the kind of design work it's capable of, which are independent of the fact that it's iterative.

> Strapping bird wings onto a rat often isn't going to work.

That you have to implement genuinely different things differently is true enough, but it doesn't really challenge the point that class hierarchies make encapsulation harder. Also: maybe an engineer would design a bat a bit differently inside so that it could use the avian wing tech? Just a thought.

> Why not classify all of our code at the finest grain we can?

My point there was that if you had to choose one axis on which to classify parts of your code, implementation details wouldn't likely be it. That you have to choose is a consequence of the strict single-inheritance model here. Completely classifying a software abstraction means identifying the point in n-dimensional space that describes it in terms of all the different ways to categorize it. A fool's errand in general, I suspect, but at any rate not possible under the class inheritance model (single or multiple).

Edits: multiple for clarity

> Edit, let me try that point again, because I don't think that was very clear. You may design iteratively, but you're still designing. ... You have more tools than that, so as a metaphor evolution as a designer can be described by a simpler ontology.

I would claim that our process of building things very much follows the process of evolution, even if we have the ability to design explicitly rather than perform random mutations followed by natural selection. Our ability to classify things and carry over implementation fits with this way of building. OOP is not just a way of implementing a design; I would say that is actually secondary: it's a way of coming up with a design in the first place, when you really don't have it all down in your head before you start coding.

This is what so annoys me about Haskell: it works very well when you can figure out everything beforehand; in fact, you are expected to think long and hard about your problems, because its language abstractions are very unforgiving otherwise.

Haskell designs are then elegant out of necessity, and this is very limiting when you just need to get something done (aka worse is better). OO implementations are typically over-engineered and less elegant, not because OO thinking is somehow defective, but because elegance is not required to encapsulate complexity and just get something working.

> My point there was that if you had to choose one axis on which to classify parts of your code, implementation details wouldn't likely be it. A fool's errand in general, I suspect, but at any rate not possible under the class inheritance model (single or multiple).

Implementation details provide stronger signals of similarity than artificial interfaces do. If you were going to let a DNN or RNN classify your code, it would probably be along this axis, as well as by how the code was actually used.

> That you have to choose is a consequence of the strict single-inheritance model here.

I rarely let myself be constrained by single inheritance, even when I'm coding in C# I have my mixin patterns handy (ya, more boilerplate, but I can type fast; all my own languages support mixins).

> Completely classifying a software abstraction is identifying the point n-dimensional space that describes it in terms of all the different ways to categorize it.

I agree, but if we aren't limited by single inheritance (say linearized multiple inheritance via mixins...or gasp...aspects), then this argument no longer holds.

I'm not an absolutist on any of this either, and in practice I make classes inherit each other when it's convenient and makes my code work well. And it sounds like we agree that using mixins is a powerful way of avoiding the ontology traps that lurk in class inheritance structures, and that not having them is painful and limiting [1]. So I'm not even sure there's a real disagreement here.

[1] I too used mixins in C#. Not sure how you'd do it now, but at the time you had to use Castle DynamicProxy and it was...unpleasant. But worth it.

In C#, you define an interface, and in the same namespace as the interface, you define a set of extension methods to the interface. Any class that implements the interface gets the extension methods. The interface can even be empty.

    namespace Yarp{
        public interface IMixinFoo{
        }

        public static class IMixinFooExt{
            public static void DoBar(this IMixinFoo _this){
                // extension-method body goes here
            }
        }
    }

    namespace Grog{
        using Yarp;

        class Baz : IMixinFoo{
            public static void Main(string[] args){
                Grog g = new Grog();

                g = null;
                g.DoBar(); // still works, if DoBar guards against a null-ref exception
            }
        }
    }
I find myself not using extension methods so much these days unless I want behavior for a specific instance of a generic type. The problem is state and polymorphism, which can be handled well enough via delegation to a nested mixin object + an interface to tag objects that have this nested mixin object.

With C# borrowing "everything must live in a class" from Java, extension methods are the only way to do sane functional programming in C#. In fact, LINQ is implemented entirely through extremely generic extension methods. Indeed, extension methods were added to C# to enable LINQ, which was created to drag .NET developers over to FP.

Extension methods are great for writing anything that chains a lot of method calls together, specifically because they aren't methods on the object. Because it's possible to call an extension method on a null instance, you don't have to inject arbitrary error handling in the middle of your call chain. You can do it at the call site.

That said, there are a number of places where LINQ doesn't chain very well. Its built-in aggregating functions, for one. Average, Sum, Min, etc. don't know how to handle a zero-sized collection. And of course, those functions are notionally not defined for zero-sized collections. But I think a better design is to allow the user to specify a default value in the case of a zero-sized collection, or to just return null (signifying "there is no Average, there is no Sum"), rather than throw an exception. So I have a polyfill of sorts to make similar extensions that play nicer.
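The "default value for an empty collection" idea translates to other languages, too; here's a rough Ruby sketch (the method name `safe_average` is made up, not part of any library):

```ruby
module SafeAggregates
  # Average that returns a caller-supplied default (nil by default)
  # for an empty collection, instead of raising or dividing by zero.
  def safe_average(default = nil)
    return default if empty?
    sum(0.0) / size
  end
end

Array.include(SafeAggregates)

[1, 2, 3].safe_average   # => 2.0
[].safe_average          # => nil
[].safe_average(0)       # => 0
```

Callers in the middle of a chain can then decide what "no average" means for them, rather than wrapping the chain in exception handling.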

The problem is the types. Those methods do know how to handle 0-sized collections, but they must be operating on nullable types.

Enumerable.Empty<int?>().Average() returns null.

If you have a sequence of integers, you can get the behavior you want by converting the elements to nullable types, like this: seq.Average(x => (int?)x)

Ah, yeah, I'd forgotten about extension methods and how they could be used as mixins. I did indeed switch to that when C# 3 came out. I guess I was just remembering earlier pain.

Hate K&R brace style in C#

I use the tiny brace extension for visual studio that sets brace lines to 3 pt font size. My C# code resembles Python code, for the most part.

I did it for brevity. You will notice that the variable names also don't make sense.

By the way, you couldn't instantiate a Grog as it's a namespace. I think you meant new Baz(). Still a neat and instructive example.

Yes, but I like these variable names

C# doesn't have K&R brace formatting... the first brace is placed on a new line (unfortunately).

That is completely a stylistic preference. When I write C#, the opening braces are placed on the same line as the preceding control structure.

It's not a crime, but Allman bracing is normally used. Every official guideline, every programming book, and every open source project I know of uses Allman for C#, so anything else stands out like a sore thumb. It's about convention, consistency, and the resulting readability. I'm not a fanboy of any bracing style :) I just think that in Rome you should do as the Romans do. When I use Java or PHP, I use K&R; it's a context thing, and it would just look weird and out of place otherwise. I believe it's better to embrace the "native" (common) coding style of each language, unless you only ever work on your own.

Allman bracing sucks on wide-screen laptops, and it was only when I found the 3-point brace-line Visual Studio extension that I was able to switch. I'm really getting annoyed with braces; we should get rid of them.

Python replaced them with indentation. Funnily enough, it is one of the most complained-about things in Python. I hated it at first, and it's still a pain when switching between JavaScript and Python, but you never get the mismatched-brace problem, which more than makes up for it.

Well coding on laptops sucks anyway :)

Fair enough. I just compose my mixins directly and bridge the gap through delegation. I wish these rants included the obvious solutions; I'm really tired of the argument that OOP sucks because...Java. Or OOP sucks because...single inheritance. There are plenty of things we can do to make our languages not suck, and this is not a fundamental limitation of OOP.

Upvotes for an intelligent debate.

"This is what so annoys me about Haskell: it works very well when you can figure out everything before hand; in fact, you are expected to think long and hard about your problems because its language abstractions are very unforgiving otherwise."

This is not my experience at all. I find refactoring in Haskell to be quite pleasant, and you can generally start from something small and build up. You do have to get the types to line up before it will build, but if the types aren't lining up your logic is likely incorrect, and figuring out why can be instructive.

"Is this another ad for big design upfront, aka waterfall?"

Interesting comment. I have no particular leanings towards any methodology[1], but I do find it useful to understand a situation before presuming to have appropriate ideas about changing it[2].

To many that constitutes big design upfront. So be it. My projects tend to deliver on time, and to budget.

[1] https://www.wittenburg.co.uk/Entry.aspx?id=99bb5987-e08d-4e8...

[2] https://www.wittenburg.co.uk/Entry.aspx?id=46870dcd-70cb-4ef...

Anecdotally: The best projects I've ever worked on (happy clients, met budgets for time/cost) had the most up-front planning and thought on the requirements.

The worst projects, with runaway costs, unhappy stakeholders and a real brutal grind for developers had the least requirements planning.

I've seen code design up-front go wrong often enough, but I've yet to see up-front specification and requirements gathering be anything but much more successful than the alternative.

As a matter of fact, knowing you have to deliver C, but then focusing development on A, with an idea that there might be a B, and not putting any real thought into any of them is so so painful IME. Nobody ends up happy. When I hear "iterative development" these days from a developer, I don't hear "pragmatically responding to a change in requirements". I hear "I don't feel like thinking this through, so I'm going to do something I know doesn't meet the requirements first, and we'll rework it at some point".

I've rarely seen core project requirements change much over the course of a project. I've seen developers cargo-culting and generating rework a number of times though.

As an early "Agile" fan, who was a developer when the Manifesto debuted, it's real frustrating that today's development landscape seems to promote the idea that planning is bad and you should just start diving into writing PostIt notes and setting up your first week's sprint without a thought to an overall schedule.

That's not Agile, that's just cowboy coding with the appearance of rigor (IMO). The whole point is to make a plan, research, investigate, get a solid idea of what you need to do, then do it. And while you're doing it, if circumstances change, be prepared to change with them.

> "Responding to change over following a plan".

> "while there is value in the items on the right, we value the items on the left more"

Which totally makes sense. Yes, don't stubbornly insist on delivering a feature the stakeholders don't want just because it's in the plan. And if you have a fixed time budget, yes, focus on the scope that's most important and pick your battles. None of that says "don't plan". "there is value in planning". That is what the Manifesto says! So stop using "agility" as an excuse to do a bad job, generating tons of rework, burning people out, and spending other people's money. There's nothing "agile" about that.

"Iterative Development" is just code for exactly that IME. Kaizen is about improving your process. Not throwing it away.

Sorry if that's a bit too ranty. ;-)

I don't disagree that encapsulation has a place, but it's not the universal right answer. Sometimes it makes sense to use inheritance, and sometimes it makes sense to use encapsulation.

The article was arguing that inheritance is wrong, using a (broken) biological metaphor to back up that argument. In fact, it's pretty easy to argue that the coding example was broken in the same way: you could resolve the stated dilemma (using a transaction history instead of a current balance) by creating a new subclass at some level of the original hierarchy. And if you can't do that, you possibly had a broken design from the start, or you really did need to rework all of your original code to adapt to the new implementation (or both).

Mixins don't make your code magically adaptable to re-implementation. They force you to encapsulate more stuff, but you could just as easily design an inheritance hierarchy with the same properties (i.e. make subclasses all work with private data via protected methods), or you could adopt a hybrid approach -- for example, one could imagine making the class hierarchy in this post using a Strategy pattern to implement the actual tracking of balances.
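For instance, the Strategy idea could be sketched like this in Ruby (all the class names here are hypothetical, not from the article):

```ruby
# Strategy pattern: the account delegates balance tracking to a
# swappable strategy object, so the tracking scheme can change
# without touching the hierarchy the account lives in.
class RunningBalance
  def initialize
    @balance = 0
  end

  def record(amount)
    @balance += amount
  end

  attr_reader :balance
end

class TransactionHistory
  def initialize
    @transactions = []
  end

  def record(amount)
    @transactions << amount
  end

  def balance
    @transactions.sum
  end
end

class Account
  def initialize(strategy)
    @strategy = strategy
  end

  def deposit(amount)
    @strategy.record(amount)
  end

  def balance
    @strategy.balance
  end
end

acct = Account.new(TransactionHistory.new)
acct.deposit(10)
acct.deposit(5)
acct.balance  # => 15
```

Swapping `TransactionHistory` for `RunningBalance` changes the implementation without changing any caller of `Account`.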

Usually, when people have an axe to grind with inheritance (or encapsulation), it's because they've run into the problem that code design is subjective and imperfect. Programmers don't like that much, and it's more fun to blame the tools.

Can you sketch out how you would do it with inheritance, and why it would be a better solution?

Do you have any other good examples of situations where it makes most sense to use inheritance?

I think the fundamental problem though is that there are plenty of areas in biology where this breaks down on a practical level. The problem is actually a nomological problem, which deals with the limits of ontologies. Do "species" exist as such? Or are the limits which we place on them projected by us? Obviously a bat is not a dolphin, but where do you draw the line between species and subspecies, between one species and another closely related one, or between one genus and another closely related one?

These turn out to be remarkably tricky questions, which is why Linnaean classification is not as stable as we would like to believe... It mostly works, but only to some extent, which is the problem.

And with the limits not only of ontologies, but of models in general.

All models are wrong. Some are useful. As you pointed out, there is no such thing as a species in the real world -- it's just in the model that there's a species.

The model has been so successful that, unfortunately, it's treated as "true" by many people, which means that 1) they don't understand its limitations, and 2) they don't work to improve it.

+1, I'm glad you mentioned this.

The example that comes to my mind is Rosa gallica vs. Rosa alba vs. Rosa damascena. Trying to understand why these are species rather than subspecies, given the ease of interbreeding (over many generations and hundreds of years), leads to very big headaches.

To properly model all species with inheritance, species have to inherit from ancestors in their lineage. A Human IsA Primate because proto-primates evolved before humans did. The hierarchy is a graph with a generation (time) axis.

The dissonance comes in with timeless ontologies. Ontologies whose members don't share a common ancestor. Rather, the members share traits. Inheritance works in some cases, but not in all. Once it stops supporting all cases, it causes unnecessary complexity to the model.

All models are wrong. Some are useful.

You're confusing the model with reality. The classification system is simply a model that describes many things correctly, and many other things incorrectly. If you feel that there is inherent truth of the idea, then that probably means that it's a really good model. But it's still just a model.

For example, you said that a human ISA primate. But that's only in the model. Outside of the model, there is no such thing as a primate. Reality is far more complicated, and "human ISA primate" misses many corner cases. You also said we literally share implementation, which, again, may be useful but isn't quite true, whether one is talking about DNA, epigenetics, or macroscopic properties.

But that's okay -- it's still a good model, because it's useful and simple and easy to understand and works much of the time. We just have to be careful not to forget its limitations and applicability.

But that's an example of why inheritance-as-subtyping is dubious: sharing implementation doesn't imply shared behavior, and for subtyping, common behavior that clients can rely on is what we want.

It is pretty regularly pointed out in discussions of evolution, that if someone were to try and "intelligently design" lifeforms, they would be better off mixing and matching components instead of trying to pile on with inheritance like you find in nature. (That's how you get a blind spot in the human eye, or the giraffe's crazy laryngeal nerve.)

Sure, but if we take that analogy further, what happens during environmental change (aka requirement change)? With biology you get extinction: a whole class of organisms dies off. You then create new classes from some root branch. A class-based code base works similarly: you end up deleting a bunch of classes and then creating new ones. I think we can do better than this.

The comparison of real life biology and the software problem is superficial in the first place.

No one is releasing ‘Biological life forms 2.0’ and worrying about whether penguins stop working.

The problem is that writing reusable software is hard, inheritance is a really easy way to introduce reusable code into your software but a bad abstraction or base class has viral effects.

Yes, but does Human inherit from Neanderthal? If we don't know this beforehand, how do we implement both classes?

Yeah, the idea that the penguin and bat examples would be a problem for class hierarchies is totally silly. Those were the exact examples that were originally used to teach me about class hierarchies; you can _override_ stuff like that in subclasses. That was the entire point.

What is the programming equivalent of "morphological classification"?

What is the morphology of a class? I would argue that it is the interface. That is the part that determines the form that other programs interact with; that's the part that actually matters for object-oriented design.

So while you're right that in biology morphological classification is a distraction that belies the existence of deeper relationships, in computing morphological classification is actually the most important part and what we should focus on.

    class Penguin : Bird{
        override void Fly(){
            throw new InvalidOperationException("can't fly");
        }
    }

But now you can't assume that Birds can Fly. You have to catch InvalidOperationException on every invocation. And what's special about Fly? What about other Bird methods? Surely the same can be done to all of them. Now you can't assume anything about a Bird. So what good is the Bird abstraction, if you can't make any assumptions about it?

This is breaking the Liskov Substitution Principle[1]. By saying that Penguin is a subclass of Bird, you're implying that I should be able to stick a Penguin into any code that expects a Bird, that code will just work, without it having to care about the particularity of that Bird actually being a Penguin.

[1] http://en.wikipedia.org/wiki/Liskov_substitution_principle

[edit: grammar]

Nitpicking here, but birds as a biological class are defined by having wings, as opposed to being able to fly. So the bird abstraction allows you to assume a bird has wings, and if it has wings it is likely able to fly (but not guaranteed).

Overall agreed, though. This ties into timr's point that if there are errors in your classifications, then your class hierarchy is incorrect.

Well, yes, indeed. It depends on the purpose of the abstraction. It may well be that Fly() should not be a method of Bird (maybe we need a FlyingBird subclass or, even better, a Flying mix-in). My objection was to the "solution" of throwing InvalidOperation, not in support of the assumption that birds can necessarily fly (note the capitalization in my initial remark).

Throw that InvalidOperationException again and I'll make your life exceptional once more. extremely. XD

"Mostly" is as good as "not at all", though. Yes, you should stop using class hierarchies. Composition is the path to salvation.

Thanks for the input; the more cross-disciplinary knowledge flow, the better. I like your takeaway in particular. The author is basically saying 'All this OOP stuff that most of the industry has been doing for 20 years? All wrong! Do it my way.' Hearing that biology still uses directed acyclic graphs for classification (e.g., single inheritance) shows just how powerful single inheritance really is, and that maybe we shouldn't be so quick to throw it away because it doesn't match something.

I think sticking to the 'age old wisdom' of using direct inheritance for IS-A relationships and composition for HAS-A is still the right way to go. It's been tested, it works, and using HAS-A for everything makes for less understandable code imo. Often you can fix an IS-A by simply refactoring your graph with a better understanding of the problem space - as it turns out biologists have done as well.

In my personal experience, I've never found a true IS-A relationship in my code. I've found lots of interfaces, though.

A lot of the time, you'll think you're defining a superclass when you start writing things like "Square IS-A Shape; Rectangle IS-A Shape". But Shape will turn out to not define any common behavior, just a category you want to restrict your inputs and outputs to; and you'll want to be able to assert that anything is, in addition to whatever else it is, a Shape. So Shape is an interface.

A lot of the time, you'll think you're defining a superclass when you start writing things like "User IS-A Person; ProjectOwner IS-A User". But it'll turn out that you want to keep People around even when they stop being Users, and to keep Users around even when they stop being ProjectOwners. So you'll rearrange things and find that you're now asserting "User IS-A Role; ProjectOwner IS-A Role; Person HAS-MANY Roles." And Role turns out to be, again, an interface.

The only example I can think of that does fit single-inheritance is when modelling objects that directly express a genealogy. For example, GithubForkOfProjectA IS-A ProjectA, or Ubuntu IS-A Debian. But these aren't typically things you'd express as types; they're just instances (of, respectively, GithubProject and LinuxDistribution.) Each instance HAS-A clonal-parent-instance.
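The Role refactoring described above can be sketched in plain JavaScript. All the names here (Person, User, ProjectOwner, describe) are illustrative, not from the article; the point is that "Role" is just an interface any object can satisfy, with no inheritance anywhere:

```javascript
// A "Role" is just an interface: any object with a describe() method.
// Person HAS-MANY Roles; nothing inherits from anything.
function Person(name) {
  this.name = name;
  this.roles = [];
}
Person.prototype.addRole = function (role) {
  this.roles.push(role);
  return this;
};
Person.prototype.describeRoles = function () {
  return this.roles.map(function (role) { return role.describe(); });
};

// Two independent objects that happen to satisfy the Role interface.
function User(login) { this.login = login; }
User.prototype.describe = function () { return "user:" + this.login; };

function ProjectOwner(project) { this.project = project; }
ProjectOwner.prototype.describe = function () { return "owner:" + this.project; };

var alice = new Person("Alice");
alice.addRole(new User("alice42")).addRole(new ProjectOwner("widget"));
console.log(alice.describeRoles()); // [ 'user:alice42', 'owner:widget' ]
```

A Role can be dropped from the array when a Person stops being a User, without rearranging any class hierarchy.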

I guess there's one possibly-practical use of inheritance which I've nearly implemented myself: if you force your database schema migrations to always follow the Open-Closed Principle, and you want to migrate the rows of a table as you encounter them to avoid taking the DB offline, then you could have two separate models for a Foo table, FooV2 and FooV3, where "FooV3 IS-A FooV2". Each row has a version column, and is materialized as the model corresponding to that version. Your code that expected FooV2s would then be satisfied if it was passed a FooV3.

Does anyone actually do this, though? I don't just mean "row-by-row migrations", I mean the "with two models, one version inheriting from the other" part. And, if so, what do you do when you make a change to a model that doesn't obey the Open-Closed Principle: where FooV3s break the FooV2 contract?

The one case where languages without traditional OO inheritance are painful is where you want to do "like this other thing, except for this one particular thing that it does differently". And sure, maybe that's always bad design - but it comes up a lot in real-world business requirements. For all the people saying "you should use alternatives to traditional OO" I've never seen an actual example of how to do this better - you can patch those instances at runtime (urgh), you can create an object that implements the same interface and delegates to an instance of the base type (much less readable in every language I've seen, and effectively reimplementing inheritance without the syntactic sugar).

I think the right solution is simply to have firmer constraints about the relationships between parent and child classes - just like decoupling a class's implementation from its interface, it should be possible to separate out the interface it exposes to child classes as well. The one library/framework I've seen that does this really effectively is Wicket - it makes extensive use of final classes, final methods, and access modifiers to ensure that when you need to extend Wicket classes you can do so via a well-defined interface that won't break when you upgrade your Wicket dependency. It works astonishingly well.

You're correct in that special-case "like this other thing, except for this one particular thing that it does differently" objects happen all the time due to business rules.

But the Decorator pattern is not "inheritance without the syntactic sugar." Decorated objects, unlike subclass instances, are allowed to break the contract of the object they decorate: they don't have to claim to obey any of its interfaces, they can hide its methods, they can answer with arbitrarily different types to the same messages, etc.

If a language made defining decorators simple, I think it'd remove a lot of what people think of as the use-case for inheritance. (I mean, you aren't supposed to use inheritance for Decorator-pattern use-cases--it will likely break further and further as you try--but people will keep trying as long as the first steps are so much easier than the alternative.)
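A decorator in plain JavaScript is just a wrapper object that delegates through the public interface of the thing it wraps. The account names below are hypothetical, chosen only to illustrate the pattern:

```javascript
// The object being decorated: a minimal account.
function Account(balance) { this._balance = balance; }
Account.prototype.balance = function () { return this._balance; };
Account.prototype.withdraw = function (amount) { this._balance -= amount; return this; };

// A decorator: wraps an account, adds an overdraft check, and delegates.
// It touches only the public interface of the wrapped object -- never its state.
function NoOverdraftAccount(account) { this._account = account; }
NoOverdraftAccount.prototype.balance = function () { return this._account.balance(); };
NoOverdraftAccount.prototype.withdraw = function (amount) {
  if (amount > this._account.balance()) throw new Error("insufficient funds");
  this._account.withdraw(amount);
  return this;
};

var account = new NoOverdraftAccount(new Account(100));
account.withdraw(60);
console.log(account.balance()); // 40
```

Note the decorator is free to change the contract (here, withdraw can now throw), which a well-behaved subclass could not do.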

> If a language made defining decorators simple, I think it'd remove a lot of what people think of as the use-case for inheritance. (I mean, you aren't supposed to use inheritance for Decorator-pattern use-cases--it will likely break further and further as you try--but people will keep trying as long as the first steps are so much easier than the alternative.)

I actually agree with this, and I'd be interested to hear of language efforts in that direction. But until there's this easy way to do decorators, just telling people "don't use inheritance" isn't going to work.

> you can create an object that implements the same interface and delegates to an instance of the base type (much less readable in every language I've seen, and effectively reimplementing inheritance without the syntactic sugar).

In case you're hankering for a language that makes delegated composition easier, Go's "embedded fields" are sugar for exactly that: http://golang.org/ref/spec#Struct_types

Class inheritance should be about the data not the behavior. If you're looking for "IS-A" inheritance, you should look at: what parameters does the class require on construction. If your implementation isn't fundamentally based around the same construction parameters, then it shouldn't be a derived class.

For example: HTTPResponseHandler IS-A TCPResponseHandler because its primary role requires a TCP socket on construction. The only inherited methods should be management of the data from construction.
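A sketch of that rule in JavaScript (the handler names are hypothetical, and the socket is faked for demonstration): the subclass exists because it is constructed around the same data, and the only inherited method manages that data.

```javascript
// Base class: constructed around a socket; the inherited method only manages it.
function TCPResponseHandler(socket) { this._socket = socket; }
TCPResponseHandler.prototype.send = function (bytes) { this._socket.write(bytes); };

// HTTPResponseHandler IS-A TCPResponseHandler because it is fundamentally
// constructed around the same thing: a socket.
function HTTPResponseHandler(socket) { TCPResponseHandler.call(this, socket); }
HTTPResponseHandler.prototype = Object.create(TCPResponseHandler.prototype);
HTTPResponseHandler.prototype.sendResponse = function (status, body) {
  this.send("HTTP/1.1 " + status + "\r\n\r\n" + body); // uses the inherited send()
};

// A fake socket, just to show the flow of data.
var written = [];
var handler = new HTTPResponseHandler({ write: function (b) { written.push(b); } });
handler.sendResponse(200, "ok");
console.log(written[0]); // "HTTP/1.1 200\r\n\r\nok"
```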

Problems with this only arise when people subclass to gain the interface but not the data. You should never need to do this and it's a problem with using classes instead of interfaces for your parameter declarations – it's not a problem with class inheritance.

Except that interfaces are not supposed to have behaviors.

Shape can lay itself out with regards to its enclosing rectangle for example.

From my experience, using IS-A only works on simple concepts, and almost never on more than 2 layers. But it's still useful.

I suppose that if languages had the same "automatic function redirection" to sub-components, such as Golang, then people would use composition a lot more, though. That's actually the thing that seduces me the most about this language.

> Shape can lay itself out with regards to its enclosing rectangle for example.

How so? I mean, sure, Shape can declare, say

    bool is_inside(Rectangle enclosing_rect)
...but how would Shape know how to calculate that? It'd be an abstract method. All of Shape's methods would be abstract methods. Thus, a Shape is an interface: a contract an object makes with the system to say that it has a given API.

But this is an excellent example of where Shape can have concrete code in it though. is_inside is true iff the shape's bounds are inside the rectangle. So assuming you have a means to test rectangle bounds:

    bool is_inside(Rectangle enclosing_rect) {
       return enclosing_rect.contains(this.get_bounds());
    }
Now you don't need to implement cookie cutter is_inside methods everywhere, but rather the simpler get_bounds(). In fact, you can add is_inside after all your shapes are implemented and it will just work, as long as they have get_bounds.

Maybe you want to add some sanity checking to make sure bounds never have negative width, so you implement get_bounds() in Shape, and implement get_left(), get_right(), get_top(), get_bottom() in the child classes. Now you don't have to add cookie-cutter assert(left < right) stuff everywhere, just in one place.
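The concrete-is_inside-plus-abstract-get_bounds idea can be spelled out as a small runnable sketch in JavaScript. Rect and Circle here are illustrative stand-ins, not code from the thread:

```javascript
// Base class: concrete isInside(), abstract getBounds() supplied by children.
function Shape() {}
Shape.prototype.isInside = function (rect) {
  return rect.contains(this.getBounds());
};

// A plain rectangle value with a contains() test.
function Rect(left, top, right, bottom) {
  this.left = left; this.top = top; this.right = right; this.bottom = bottom;
}
Rect.prototype.contains = function (other) {
  return other.left >= this.left && other.right <= this.right &&
         other.top >= this.top && other.bottom <= this.bottom;
};

// A child class only supplies getBounds(); isInside() comes for free.
function Circle(cx, cy, r) { Shape.call(this); this.cx = cx; this.cy = cy; this.r = r; }
Circle.prototype = Object.create(Shape.prototype);
Circle.prototype.getBounds = function () {
  return new Rect(this.cx - this.r, this.cy - this.r, this.cx + this.r, this.cy + this.r);
};

var circle = new Circle(5, 5, 2);
console.log(circle.isInside(new Rect(0, 0, 10, 10))); // true
console.log(circle.isInside(new Rect(0, 0, 6, 6)));   // false
```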

Other similar ideas: if you write an "intersects(Point)" method on each shape, then you can pull a concrete implementation of sampling-based area approximation up to the base Shape class, and leave analytical area calculation to the children. This is useful if you are working with distance-field, parametric or noisy shapes, for example.

The template method pattern is a great example of how you might define some behavior for is_inside, without completely defining it.

I've been building a game engine in JavaScript as an exercise for the past few weeks. Using IS-A for my base classes and HAS-A composition for various needed and shareable behaviours works wonderfully, is super testable, and makes sense. Seems like the answer in this case does indeed lie somewhere in the middle of two extremes.

Agreed. The following line of code breaks self-encapsulation:

    this._currentBalance = this._currentBalance - cheque.amount();
"In JavaScript (and other languages in the same family), classes and subclasses share access to the object’s private properties."

Self-encapsulation ensures internal variables are directly referenced in only two places: the getter and setter accessors. This adheres to the DRY principle. The above line should be written:

    balance( balance() - cheque.amount() );
Or more explicitly as:

    setBalance( getBalance() - cheque.amount() );
Even if the superclass is later modified to use a transaction history, which would violate the Open-Closed Principle, the subclasses would continue to function correctly. (The transaction history would be implemented inside the accessors, making the additional code transparent to subclasses.)

Source code that eschews self-encapsulation will be brittle. Developers must grok the DRY principle. Code that directly sets or retrieves internal variable values in more than two places should be refactored to eliminate the duplication.
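As a concrete sketch of self-encapsulation (a minimal hypothetical Account, not the article's): the internal variable is touched in exactly two places, and every other method, including any in subclasses, goes through the accessors.

```javascript
function Account(openingBalance) {
  this._balance = openingBalance; // touched only by the two accessors below
}
// The only two direct references to the internal variable:
Account.prototype.getBalance = function () { return this._balance; };
Account.prototype.setBalance = function (amount) {
  if (amount < 0) throw new Error("balance cannot go negative"); // one place for checks
  this._balance = amount;
  return this;
};
// All other methods go through the accessors, so a later change to the
// storage (e.g., a transaction history) happens in one place.
Account.prototype.deposit = function (amount) {
  return this.setBalance(this.getBalance() + amount);
};
Account.prototype.withdraw = function (amount) {
  return this.setBalance(this.getBalance() - amount);
};

var account = new Account(100);
account.deposit(50).withdraw(30);
console.log(account.getBalance()); // 120
```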

Also, the Account class is incomplete. The Account class should have a withdraw method to mirror the deposit method. The ChequingAccount would override the Account's withdraw method to accept a cheque object, such as:

    ChequingAccount.prototype.withdraw = function (money) {
      Account.prototype.withdraw.call( this, money instanceof Cheque ? money.amount() : money );
      return this;
    };
Calling the superclass in JavaScript requires the clunky Account.prototype.withdraw.call(this, ...) incantation, but this revised design is otherwise sound. In this way, the parent class can vary its account balance implementation (e.g., introduce a transaction history) without affecting its children.

> maybe shouldn't be so fast in throwing it away because it doesn't match something

I think this touches on a big difference between science and software. In science, models are not "thrown away" because there is no need to throw them away. Multiple models can exist side-by-side, whereas it seems to me that this is not the case in software.

For example, Newtonian mechanics were great for a really long time. Then relativity came along and was found to be superior to Newtonian mechanics in some situations. Does that mean that we threw out Newtonian mechanics? No! It's still a useful model, regardless of the fact that it's wrong in some cases.

The point I'm making is that biology uses "single inheritance" not because it's right, but because it's useful in some cases. In other cases, where it breaks down, you will be forced to use a different model.

> still uses directed non-cyclic graphs for classification (eg, single inheritance) shows just how powerful single inheritance really is

Perhaps I'm missing something, but I think the article makes the point that single inheritance results in a tree structure, not a DAG. That aside, I feel it's unfortunate that he chose to illustrate his point by way of analogy, as it seems some of the arguments here are refuting the analogy and not his original point.

My personal experience has been that as software grows in complexity, isa relationships tend to fall apart. Generally this is because although they have some characteristics in common, they only have SOME characteristics in common. Further, the more the tree grows in breadth, the fewer the shared characteristics there are. Over time, this overlap becomes so small as to utterly rob the isa relationship of meaning. Saying X is a Y simply means that (generally for historical reasons) you chose to emphasise the commonality of X and Y over the equally valid relationship that X is a Z. Oh and often an A and a B.

I say grows in breadth because, with inheritance, having a tree grow in depth suggests specialisation in each branch WITHOUT behaviour shared between some (but not all) nodes in different branches of the tree. In practice I've found this happens so infrequently as to be barely worthy of consideration.

>> when you see these kinds of errors it probably means that you're classifying things incorrectly

I accept that this may be entirely true. However, I have seen significant resources (time, mental effort, etc) dedicated to discovering the correct classification - although that presupposes that there is a correct classification, so perhaps I should say more useful classification - of a class hierarchy in a project, and have not been able to find something adequate. Whether this was due to our stupidity, or whether this is because there is indeed no clear way to express the relationship in hierarchical terms, the fact remains that it made inheritance an unsuitable way for us to model our problem domain.

> Often you can fix an IS-A by simply refactoring your graph

Your use of the word suggests to the reader that this is an easy undertaking. In practice I have found this is often not the case. Saying that we CAN refactor the inheritance tree doesn't mean that it is simple to do, and I posit that by the time you have learnt enough about your domain to recognise you have modelled it incorrectly, the hierarchy is of such complexity that this is generally a very difficult undertaking. Again, I'm not arguing against refactoring, but rather that refactoring inheritance hierarchies is often difficult.

I think ericHosick (above/below?) says it best:

> In the real world, people can build out classification systems and easily add in edge cases when we find new ways things can be classified. In software, that could lead to a complete change in the software architecture.

My experiences are obviously anecdotal, but since I find that I - and many other software developers whom I talk to - struggle to make domain modelling using class hierarchies work, his argument that it is inherently flawed rings true.

> That aside, I feel it's unfortunate that he chose to illustrate his point by way of analogy, as it seems some of the arguments here are refuting the analogy and not his original point.

I disagree. I think it's extremely fortunate. You are acting as if you know all the answers - and you may even think you know all the answers - but I think refusing to consider that biology went through something similar (and I bet there were numerous biologists who wanted to destroy the classification system as it was 'wrong') and found an answer that differs with yours does not mean there is no link. You could be wrong too. Maybe we are missing tools to allow us to alter our classifications later and that classifications are the best solution when given the correct tools.

Okay, having read the article, I think it goes too far.

Inheritance hierarchies have their issues, and raganwald touches on them, but there's a strawman argument here.

(Incidentally, raganwald, I've noticed this about all your OO articles. You seem to have a bias against class-based design. It's causing your essays to be less brilliant than they could be.)

Fundamentally, you can think of inheritance as a special case of composition. It's composition combined with automatic delegation.

In other words, if you have A with method foo() and B with method bar(), "A extends B" is equivalent [1] to "A encapsulates an instance of B and exposes 'function bar() { return this._b.bar(); }'."

This is very useful when you want polymorphism. Writing those delegators is a pain in the butt.

More importantly, it tells us how to use inheritance safely. Only use inheritance when 1) you want to automatically expose all superclass methods, and 2) you don't access superclass variables.

Now, JavaScript does have the specific problem that you can accidentally overwrite your superclass's variables, and that's worth talking about. But I think that saying "inheritance is bad" goes too far. The article would be stronger if talked about when inheritance is genuinely useful, the problems it causes, and how to avoid them.

Edit: In particular, I want to see more about polymorphism. Polymorphism is OOP's secret superpower. Edit 2: I'm not saying polymorphism requires inheritance.

[1] Not quite equivalent.

>> In other words, if you have A with method foo() and B with method bar(), "A extends B" is equivalent [1] to "A encapsulates an instance of B and exposes 'function bar() { return this._b.bar(); }'."

Your point would hold only if all subclasses agreed to use only the public interface of their parents. But if you do that, your "inheritance" isn't really classical inheritance any more, it's just something that saves you typing when implementing delegation, like ruby's method_missing.

The article is not taking issue with composition + "an easy, terse delegation mechanism." The article is taking issue with actual inheritance: the sharing of private state between class and subclass. Your claim that the two things are equivalent just isn't true.

Sharing private state is not fundamental to inheritance. That's my point. It's a bad idea and people who use inheritance (should) know better than to do that.

That's why I said it was a strawman argument. (It's also a bit hypocritical: raganwald says, "JavaScript does not enforce private state, but it’s easy to write well-encapsulated programs: simply avoid having one object directly manipulate another object’s properties." Somehow he fails to apply the equivalent principle to inheritance.)

It's not a strawman: most people who use inheritance directly access properties defined in a superclass. It's a really, really common problem: in fact, most programmers, and most books, don't consider it a problem at all -- they just consider it "using inheritance."

What is happening here is that you are redefining "inheritance" in a new, more restricted way that is not in line with common usage, and then saying "But real inheritance doesn't have these problems...."

> most people who use inheritance directly access properties defined in a superclass

i think the problem is you're both just running on anecdotes.

to add fuel to that fire: i would definitely side with jdlshore on this one. in Objective-C, for instance, you cannot even access a superclass's private properties[1]. most every team i've worked on has avoided protected properties (for languages like Java which even have them) and encouraged even subclasses to talk to their superclass via the superclass's public interface.

[1] of course, you can always declare a category for the superclass which can expose whatever it wants, but subclasses are in no sense privileged in being able to do this.

It's a strawman because his argument is "avoid class hierarchies" but it is not "class hierarchies" that gives rise to this problem it's accessing the private state of another class.

His concluding remarks about class hierarchies are: "Class hierarchies create brittle programs that are difficult to modify.", but he hasn't established that - he's only shown that ignoring encapsulation within a class hierarchy creates brittle programs.

If he wants to argue that class hierarchies encourage that sort of behaviour, and should therefore be avoided, then he is welcome to, but he didn't make that argument.

His evidence only shows that breaking encapsulation is bad, even if it is contained inside a class hierarchy. That make it a strawman because the thing he has torn down is not the thing he is arguing against.

He's established both things. First, that class hierarchies are bad when they break encapsulation (he calls this the engineering problem with them).

But he also argues that they don't accommodate change well by their nature (he refers to this as the semantic problem):

>> Furthermore, the idea of building software on top of a tree-shaped ontology would be broken even if our knowledge fit neatly into a tree. Ontologies are not used to build the real world, they are used to describe it from observation. As we learn more, we are constantly updating our ontology, sometimes moving everything around.

>> In software, this is incredibly destructive: Moving everything around breaks everything. In the real world, the humble Platypus does not care if we rearrange the ontology, because we didn’t use the ontology to build Australia, just to describe what we found there.

> He's established both things. First, that class hierarchies are bad when they break encapsulation (he calls this the engineering problem with them).

His exact quote was "Class hierarchies create brittle programs that are difficult to modify."

But he has not demonstrated the first part of that except in one specific case. That case may (?) be common, but it not inherent in the problem - class hierarchies do not require breaking encapsulation.

His conclusion is not supported by his argument because his argument applies only to the strawman he created, and not to the general case to which his conclusion refers.

If I may try to split the difference: His argument applies much more when trying to do OO in JavaScript, because it's much harder to avoid using the parent class's data when you don't have private variables.


But that's a universal issue with JavaScript OO in that it provides no direct language support for encapsulation.

Developers have, for the most part, learnt to be disciplined about not accessing "private" fields in JS objects. That they (we) have not learnt to apply that lesson with respect to class hierarchies is evidence for how willingly we throw away good principles when we are working in a "special case".

It is also a lesson in why having language features that force developers into good practices is sometimes a net win, even though we might also rally against them (because we dislike their verbosity and/or hand-holding).

If you don't share private state, then why not just make the leap and switch to typeclasses instead? Then you don't have the restriction of having to implement all the interfaces for a datatype in the same compilation unit.

You're describing a fairly narrow subset of subclassing; notably one that is (as you point out) almost equivalent to composition. I think you're right: this is not a bad thing. However, how useful is it really? The kind of API's I find myself wanting to pass-through are generally small and abstract (and where they're not, I really wish they were).

Languages that make this special form of encapsulation easy suffer for it indirectly (I'm thinking esp. of java/C# here). Auto-passthrough allows huge API's that would otherwise be unwieldy. Unfortunately, inheritance isn't enough, and when that happens, the poorly designed api's with huge surface areas encourage really bad hacks. All in all I'm not convinced that api pass-through is really a net win for a language. There are alternatives too, such as mix-ins or extension methods, that would allow you to manually pass through only the smallest truly necessary core, and just re-mixin the extras.

But this is all about the best-case for inheritance. Inheritance in common usage also contains two other less ideal aspects, however: semi-private methods, and virtual methods (method overriding). I'm skeptical either has value. Overriding almost necessarily means tight coupling - you need to understand at some level how the internal state of the superclass works to replace calls (even those that the superclass itself makes!) to methods of the superclass. Protected methods suffer from a similar problem - what kind of api is public enough to allow access by a subclass but not public enough to allow access by a wrapper? Of course, using protected methods makes the previously described problem of discouraging bad design even worse; it makes it even harder to write a wrapper, necessitating inheritance even when that's perhaps not exactly what you want.

Finally, if all you want is polymorphism, you just need interfaces, not implementation inheritance.

I think you're right in pointing out that the OP's post doesn't do OOP's subtleties justice, but beyond interface inheritance, I don't think I care much for other aspects of inheritance.

*Overriding almost necessarily means tight coupling - you need to understand at some level how the internal state of the superclass works to replace calls (even those that the superclass itself makes!) to methods of the superclass.*

Your problem seems more with badly constructed public apis rather than overriding itself. The only public methods of a class that should be overridable are those with a simple and well-defined expected behavior. Overriding doesn't have to imply tight coupling. So long as it adheres to some expected contract (e.g., input/output ranges and exception handling), an overridable method should be just as much a black box to the parent class as the subclass. An overridable method with no side-effects is equivalent to an initialization parameter to an encapsulated object.

*Protected methods suffer from a similar problem - what kind of api is public enough to allow access by a subclass but not public enough to allow access by a wrapper?*

You have to think of the consumers of the class. Consumers that only need to use the class and are satisfied with the publicly available interfaces only need access to the public methods. Consumers that want to change the behavior of the class (often for the benefit of other consumers) will override the protected methods. Protected methods offer a set of extension points to consumers while providing useful default behavior for those that do not need them.

> You're describing a fairly narrow subset of subclassing; notably one that is (as you point out) almost equivalent to composition.

I'm not sure why you say this is a narrow subset. I'm describing a way of thinking about inheritance (inheritance is like composition + automatic delegation). That way of thinking can be applied to any subclassing operation, and I think it's instructional to do so. It can help you see what's a good idea and what's not.

Let's run down the list. Assume you have a class A with a method foo() and class B with method bar().

The equivalent* of "A extends B" is:

    function A() {
      this._b = new B();              // equivalent* to calling superclass constructor
    }

    A.prototype.bar = function() {
      return this._b.bar();           // manually delegate
    };
Now let's say A accesses its superclass's private variables in order to do something. The equivalent* is:

    A.prototype.foo = function() {
      return this._b._privateVar * 2;     // obviously bad, don't do that!
    };
You wouldn't access the private variables of an object you compose; it's obviously bad form. Don't do it when you inherit, either.

Now let's talk overriding the superclass's methods. There are several different ways you could do that. The equivalents* are:

    // Replace superclass method
    A.prototype.bar = function() {
      return "my own thing";            // obviously fine
    };

    // Replace superclass method and access superclass state directly
    A.prototype.bar = function() {
      return this._b._privateVar * 2;   // obviously bad, don't do that!
    };

    // Extend superclass method
    A.prototype.bar = function() {
      var result = this._b.bar();       // equivalent* to calling superclass method
      return result * 2;                // perfectly okay
    };

    // Extend superclass method and access superclass state directly
    A.prototype.bar = function() {
      var result = this._b.bar();       // equivalent* to calling superclass method
      return result * this._b._privateVar;   // obviously bad, don't do that!
    };
Don't access the private variables of your superclass (or objects you compose) and you'll be fine. Sure, you'll be in trouble if the superclass changes the semantics of the parent method, but that's true of all functions everywhere. If the semantics of a function you're using changes, your code probably just broke. It doesn't matter if the function is defined in a superclass or not.

The one thing that's unique to inheritance is the idea of semi-private ("protected") methods that are only visible to subclasses. I agree that they're something to be used sparingly, but they're no different than any other superclass method in how they should be used and overridden. It's a moot point, though, because JS doesn't have them.

*Not exactly equivalent, but close enough for these examples.

I think protected methods are just a bad idea in every case. They don't really offer protection (because a subclass can expose them) and they prevent other forms of composition even where they're more appropriate.

If you're open to inheritance, you're necessarily open to composition - making that messy serves no purpose.

As to this example:

    // Replace superclass method
    A.prototype.bar = function() {
      return "my own thing";            // obviously fine
    };
This is NOT fine - it's really quite nasty. You're breaking encapsulation by affecting how the internals of B work - calls from B's code to bar() will now fail to work as expected. What you want is to use your bar to outside code, but not affect the encapsulation of the superclass.

And note that none of this mitigates the pit-of-complexity that inheritance encourages as described in the post you replied to (i.e. bloated, hard-to-wrap API's). There are inheritance-like techniques that work better and don't have the downsides.

I don't agree, but I'm going to let it drop. I just wanted to reply to say "thanks" for engaging in a thoughtful conversation.

Somebody's been downvoting thoughtful posts like yours (because they disagree with them, I guess). I wish they'd stop and I wanted to let you know it wasn't me. :-)

Hey, thanks for the friendly sign off! A much nicer way to end a conversation.

Given the subtlety of these issues (what is the right design of such an artificial construct), I guess the only obvious thing is that there's no obviously right answer - let alone that it's easy to explain the pros and cons in a reasonable amount of time on an online forum such as this :-).

"Unlike method invocation, inheritance violates encapsulation" Item 16 of Effective Java 2nd Edition. As for the original article, I would improve the wording so that it is clear whether the author is against class hierarchies in Javascript or classes in Javascript. It is not clear to me.

One practical example: In Javascript (CoffeeScript), null values can propagate for a very long time. Calling a non-existent method throws an error immediately, while using a nonexistent field (because it changed in the super class) is not that easy to track down. From my CoffeeScript experience, almost any inheritance brought us a headache when we rapidly iterated on our code - but that doesn't mean that the CS class construct isn't useful in defining recipes for well-encapsulated objects.
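A small illustration of that difference (the object and field names are hypothetical):

```javascript
var obj = { balance: 100 };

// A non-existent method fails immediately, right at the call site:
try {
  obj.recalculate();                   // TypeError: obj.recalculate is not a function
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// A non-existent field silently yields undefined, which then propagates:
var total = obj.blance + 10;           // typo: "blance" -- no error here
console.log(total);                    // NaN, discovered far from the actual mistake
```

The second failure mode is exactly what happens when a field is renamed in a superclass: every subclass read of the old name quietly becomes undefined.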


Your invocation of Effective Java made me look for Effective Javascript, and it does exist. Amazon users give it five stars: http://www.amazon.com/Effective-JavaScript-Specific-Software...

Can anyone comment on how well this books fulfills the expectations implicit in a book calling itself "Effective X?" Or just how effective the book is with respect to accepted javascript practice?

I haven't personally read it (shame on me!) David Herman and I spoke from time-to-time while I was at Mozilla and he's very smart, a very clear communicator and he knows the language as only a language lawyer can. He's actually a member of TC-39 (the ECMAScript committee) and had a big hand in the modules coming in ES6.

By every account I've seen, it's a great book and the only reason I haven't read it is that I've already been bitten by all of JS' pitfalls once or twice :)

The real reason I'm posting a comment, though, is to give you a link to the JS Jabber episode in which they talk to Dave about the book:


That episode will give you a good idea of what the book is like.

It's a very good book. It explains a lot of 'why' and inner working of good practices.

It's also fairly comprehensive, covering:

- some evilness (type coercion with ==, eval and its performance toll)
- functions and higher-order functions
- objects and prototypes, with some good explanations of the whole prototype and constructor thing
- arrays, dictionaries, and some things to know about their prototypes
- API design and concurrency

It's 200 pages full of content, I recommend it.

Polymorphism can be achieved without using inheritance.

See Clojure's Protocols or Haskell's type classes for examples of this.

Two points here regarding Haskell. First, a function with typeclass constraints is less polymorphic (in that it will operate on fewer types) than a polymorphic function without typeclass constraints. Of course, [the fully polymorphic version is] also more limited in how it interacts with the corresponding values.

Second, parametric polymorphism in Haskell is statically resolved. You can have polymorphic functions, but any given container still contains a single type. You can still do dynamic polymorphism in Haskell (by storing a list of records of functions, rather than storing data directly) but it doesn't typically involve type classes.

The second point brings up an issue that confused me when I was first learning Haskell; maybe I can help others that are similarly confused. Coming from OO languages, the lack of heterogeneous containers seems painful in Haskell - after all, in OO languages you use containers of superclass or interface pointers all the time. Haskell has a different approach to handling the same problems, though, and it turns out there are several ways you can create (or eliminate the need for) heterogeneous containers.

The idiomatic way you'd create a "heterogeneous container" is to store a single algebraic datatype with different constructors, rather than try to store different types at all. This doesn't actually give you a heterogeneous container, of course, but it works perfectly in most cases, because in most cases the set of things you need to store in the container is closed. You really only need a truly heterogeneous container when you need the ability for someone else to come along and extend that set. Concretely, if you're writing a ray tracing application, you know all the possible shapes you may need to handle, and this approach is perfect.

On the other hand, if you're writing a ray tracing library and you want the library user to be able to define new shapes, you may want to consider another approach. The idiomatic approach here was already mentioned by dllthomas: this is a functional programming language, so use functions! Specifically, use a record of functions, with each function in the record serving the same role as a method in OO. The functions can have private data by closing over it.

Haskell has a couple of other options available, as well.

You can use existential types, but they don't really buy you anything over the record of functions approach other than perhaps looking superficially more like how you'd do things in an OO language. With this approach you define a typeclass and make all the types you want to store instances of it. The container then stores instances of the typeclass, rather than a concrete type.

You can also use Data.Dynamic to create dynamic values, which will allow you to store a truly unconstrained mix of types in the same container. Since you have to cast the dynamic values back to their real type before using them, though, this isn't a great solution - you end up with code that looks similar to chains of 'instanceof' in Java or 'dynamic_cast' in C++.

> The idiomatic approach here was already mentioned by dllthomas: this is a functional programming language, so use functions! Specifically, use a record of functions, with each function in the record serving the same role as a method in OO. The functions can have private data by closing over it.

This is also how one does OO programming in C: just roll your own v-table using a struct of function pointers. You don't get to close over an environment in that case, so you have to be careful to pass everything in.

There are similarities, to be sure. One difference is that state is more often closed over than passed around explicitly.
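For what it's worth, the contrast shows up neatly in JavaScript, which can do both styles; a toy sketch (all names are illustrative):

```javascript
// Closure style: state is captured by the functions themselves.
function makeCounter() {
  var count = 0; // private, closed over
  return {
    increment: function () { count += 1; },
    value: function () { return count; }
  };
}

// C-vtable style: state is passed explicitly with each call,
// like a struct pointer threaded through function pointers.
var counterVTable = {
  increment: function (self) { self.count += 1; },
  value: function (self) { return self.count; }
};

var c1 = makeCounter();
c1.increment();
console.log(c1.value()); // 1

var c2 = { count: 0 };
counterVTable.increment(c2);
console.log(counterVTable.value(c2)); // 1
```

The closure version hides its state completely; the vtable version leaves `count` exposed, which is exactly the "have to be careful to pass everything in" caveat above.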

records of functions vs typeclasses: In general, you use a class when you want a different type for each different behavior, and there's only one sane behavior choice for each type.

You can use GADTs for this. For example:

  data Showable where
    Showable :: Show a => a -> Showable
This allows you to create a polymorphic container like [Showable 5, Showable "hello"] where the polymorphic type is constrained to be a member of the Show typeclass.

Note that GADTs are a bit overkill for this. All you really need is ExistentialQuantification. GADTs are ExistentialQuantification + TypeEqualities.

You can have polymorphic functions, but any given container still contains a single type.

This is the case for all typed programming languages, not just Haskell. So-called dynamic languages like Python, Ruby or Javascript are merely static languages with but a single type[0].

[0] http://existentialtype.wordpress.com/2011/03/19/dynamic-lang...

In C++, if I have a

    std::list<Parent *>
then some of those pointers may actually point at a Child. The client code doesn't care. This is an important kind of polymorphism.

In Haskell, you can't reasonably express this with typeclasses, which surprises folks new to Haskell. You can still express it (as I mentioned), it just takes a different form (and precisely which form is best can vary with other considerations).

I don't know what you consider reasonable, but you can express this in Haskell with type classes and existential types:

   data P = forall a . Parent a => P a
   type PList = [P]
where Parent is a type class.

You have to be a bit more explicit when using a Child in the position of a Parent when adding to the list and you have to use (fully polymorphic!) pattern matches to extract elements from the list, but personally I consider this a good thing as it's more explicit.

Aside: Existential types are what OOP interfaces are -- and interfaces are often encoded as such in language semantics; see e.g. Types and Programming Languages (Pierce).

EDIT: Typos in code -- unfortunately my Haskell is a little rusty :(.

Yes, "Haskell can't reasonably express this with just typeclasses" is probably what I should have said. With the right extensions, Haskell can do anything, but it's not always going to be a good idea...

What's with the weasel words? Existential types aren't even remotely controversial or dangerous.

In Haskell you don't need to express this with type classes as it is trivially covered by sum types. Type classes are mostly syntactic sugar providing for ad-hoc polymorphism. The real power is provided by the underlying algebraic data types.

Sum types only cover this trivially when the type is closed or when those adding new subtypes can be expected to modify all uses of the type. It's still not the same thing.

The need for an open sum tends to be vanishingly rare in my experience.

It occurs moderately frequently in libraries where a user should be able to define domain specific types along with how the library should treat those types. A classic example would be a raytracer where the user might be adding new kinds of scene elements. It probably shouldn't occur in application code.

For what it's worth, I do think people underestimate the applicability of closed sum types.

I'm aware of the raytracer library example. The solution to that is to not use types to represent shapes. Instead, classify shapes by primitive types (triangles, quads, bezier curves, NURBS, etc.) and use a closed sum for those. It's far less common for a user to want to create a new primitive type and you can always use an escape hatch in the closed sum that allows the user to define their own primitive along with a function to draw it in terms of one of the other primitives.

This thread got silly a while back. I'm abandoning it.

> Of course, it's also more limited in how it interacts with the corresponding values.

This is backwards. Polymorphic values without constraints admit almost no operations at all - you can "copy" them, that's it. This is true to the point that (save diverging) given f :: a -> a, f can only have one meaning.

Invariance of sequence elements is also far less problematic in ML-family languages because they have sums.

"It" here being the fully polymorphic version. Rereading, it does seem misleading (or at best ambiguous) so I've expanded the pronoun.

I see. Sorry, I should have interpreted that more generously.

You're forgiven - certainly calling out the lack of clarity was important!

I haven't really explored this area of Haskell, but I think there are certain cases where this is possible. For example, I think there might be a way to have a list of tuples of `forall a. [(a -> b, a)]`, where a's type can vary, but applying the first element of the tuple to the second will always produce a `b`. I'm not sure if this is actually the case but it seems (theoretically) possible, and certainly would be convenient. More experienced Haskellers feel free to chime in...

Yeah, that's certainly possible. It's also largely frowned upon because it's usually overly complex. For instance, in your example

    [exists a . (a -> b, a)]
is literally completely equivalent to
    [b]
as the types ensure there is no other thing that can be done with those pairs.

The convenience factor thus almost never applies in practice. There are some nice theoretical properties and a great embedding of OO in Haskell via existential typing [0], but it should rarely be used.

[0] http://www.cs.ox.ac.uk/jeremy.gibbons/publications/adt.pdf

OK, since I didn't give a very good example, let me try to show a better one. Let's imagine you're writing a testing library. You have a series of tests; each one of them takes in an input, a function to run on the input, and an expected output.

    data Test i o = Test String i (i -> o) o
and then say your testing function is something like

    runTest :: Eq o => Test i o -> IO ()
    runTest (Test name input f output) = case f input of
        o | o == output -> putStrLn $ name ++ " passed"
          | otherwise   -> putStrLn $ name ++ " failed"
Then let's say you had a bunch of tests. For example, you want to test that addition works:

    test1 = Test "addition" (1, 2) (\(a, b) -> a + b) 3
And you want to test string concatenation:

    test2 = Test "concat" ("hello", "world") (\(a, b) -> a ++ b) "helloworld"
Then you could write your tests as

    doTests = runTest test1 >> runTest test2
Now if you have a lot of tests, it would be nice to put them in a list:

    doTests tests = forM_ tests runTest
However, this would require that every test have the same inputs and outputs. You couldn't do

    doTests [test1, test2]
Even though the resulting type is known (it will be an IO () regardless), and even though runTest will operate on each one, because test1 and test2 have different types, you can't put them all in a list.

I think that `forall` and similar allow you to get around this restriction somehow, but I don't really know how that works.

I mean, I agree such examples exist. I don't think this is yet a truly good example, though. The real advantage to existential types like this are in creation of multiple variants---again I recommend reading Jeremy Gibbons' paper.

But, for completeness, here's how you could write your type

    {-# LANGUAGE ExistentialQuantification #-}
    data Test = forall i o . Test i (i -> o) (o -> o -> Bool) o
Note that this is exactly equivalent to `Bool`, although the conversion can be made in two ways; if we knew the comparator function was commutative then there'd be just one way to convert to `Bool`.

    testBool :: Test -> Bool
    testBool (Test i fun cmp o) = cmp (fun i) o

    testBool' :: Test -> Bool
    testBool' (Test i fun cmp o) = cmp o (fun i)
But in either case there are no other ways to "observe" the existentially quantified types since we've forgotten absolutely everything besides `Bool`. More likely we would want to also, say, show the input.

    data Test = forall i o . (Eq o, Show i) =>
                Test i (i -> o) o
and this type is now equal to `(String, Bool)`.

    testOff :: Test -> (String, Bool)
    testOff (Test i fun o) = (show i, fun i == o)
So, in general, if you're using existential types you really want to either be using multiple variants or when you have such a combination of observables that it's not worth expressing them all directly.

Somebody has also made a JavaScript library for this: https://github.com/Gozala/protocol

Agreed about raganwald's bias; surprised to see such a naive (IMO) "don't use class hierarchies" post from him.

Your point about "don't access superclass variables" is also spot-on--I don't see how he misses that "self-enforce not calling other objects' properties" (because the language doesn't do it for you) is really not very different from "self-enforce not calling superclass properties".

Per his article, I agree that fragile base classes are a problem, but not every base class is automatically fragile--you can design an API for subclasses (in Java/C# worlds, by being very explicit/thoughtful about what you make private vs protected), just like you design an API for external callers.

Yes! What's wrong in the original article is that the subclasses have inherited access to all the base class's internals. There is no reason to allow that at all - the base class should hide its implementation and allow subclasses to specialise via public methods. One could use an interface for this if it was useful - which it would be for a library designer, for example.

Except that you can't do that in JavaScript, which is the language the article is (directly) talking about.

But he takes lessons from a language that has no access control to member variables, and tries to apply them to all OO languages. That's the problem with the article, IMHO.

Yeah, I agree. I didn't make that clear and I should have.

    > This is very useful when you want polymorphism.
    > Writing those delegators is a pain in the butt.
What's a scenario where you'd be exposing a large number of delegators? Reading this made me think - maybe your class hierarchy needs to be abstracted more deliberately into structs-with-interfaces vs domain logic classes. (It's more likely I just haven't thought about the kinds of problems you are thinking of.)

What about something like Java's `AbstractList` and `AbstractSet`? These save you a lot of typing when implementing the huge `List` and `Set` interfaces.

Fair point. The problem is solved in Haskell by having default implementations in the typeclass, leaving the programmer with only a few methods to implement (e.g., (==) is defined in terms of (/=) and vice versa; implement the one you want to get the rest of the typeclass working).

That said, that's a fairly rare case. Classes with such a large surface are often a code smell.

I think that part of the problem is that while JavaScript can do OO somewhat, it is not fundamentally an OO language, and if OO is the first tool that you reach for, you are likely doing JavaScript poorly.

JavaScript is a dramatically OO language. It's just not the OO you're used to.

Everything* in JS other than numbers, strings, and booleans is an object. Functions are objects.

See http://www.objectplayground.com for details. (Temporarily down due to server problems, but hopefully back up soon.)

*Not really everything. Not objects: undefined, null, number, string, boolean. Objects: object, array, regexp, function, everything else.

What definition of OO doesn't include encapsulation?

And encapsulation in JS is fundamentally broken - everything is public.

Yes! I most emphatically agree with this. I do a lot of OO programming in Java or C++ but I think it's a horrible idiom for JS. I also think prototypal inheritance in JavaScript is pretty ugly :o) For me, JS seems to work best when treated like a poor man's functional language using libraries like underscore.

> For me, JS seems to work best when treated like a poor man's functional language

I agree.

Others have replied (rightly) that under the hood, JavaScript has a lot of objects. Functions are objects. So in that sense I am wrong: JavaScript is an OO language.

However, the experience of programming well in JavaScript feels more like using a functional language than an OO language. Good JS has a lot more to do with things such as passing functions to functions or understanding how fn().then(fn) works than it does with class hierarchies, prototypal or classical.

Javascript is fundamentally an object-oriented language, it's just prototype-based rather than class-based.

We do something like classes in JavaScript without touching prototypes, and it works pretty well as a natural way of organizing and encapsulating code.

Private members are variables/functions defined within the constructor's closure. Public members are properties added to "this" by the constructor. Mixins can be done by calling another class's constructor on yourself.

The point is that class-based OO can be trivially imposed on JavaScript objects without abandoning the native object construction mechanism like ember.js does. In fact, CoffeeScript does this in order to implement its own classes.
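A minimal sketch of that pattern (names are illustrative, not from any particular codebase):

```javascript
// A "class" with closure-based privacy: no prototypes involved.
function Account(initialBalance) {
  // Private state: visible only inside this constructor's closure.
  var balance = initialBalance;

  // Public members: properties added to `this`.
  this.deposit = function (amount) { balance += amount; };
  this.getBalance = function () { return balance; };
}

// A mixin: just another constructor, called on an existing object.
function Auditable() {
  var log = []; // private to the mixin
  this.audit = function (entry) { log.push(entry); };
  this.auditLog = function () { return log.slice(); };
}

var account = new Account(100);
Auditable.call(account); // mix in auditing behavior
account.deposit(50);
account.audit('deposited 50');
console.log(account.getBalance()); // 150
console.log(account.auditLog());   // ['deposited 50']
```

The cost of this style is that every instance carries its own copies of the functions, which is the usual trade-off against prototype methods.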

It seems more like a functional language to me, but YMMV: http://stackoverflow.com/a/501053/5599

You seem to be under the impression that those are somehow mutually exclusive.

I'm sorry if you thought so. I am not under that impression.

However as mentioned elsewhere ( https://news.ycombinator.com/item?id=7500280 ) if most of what you do to organise your code involves passing functions to functions and very little of it involves creating class hierarchies or prototype chains, it's a fair assessment that the language that you are using is more functional than OO.

The language that you are using may be a subset of the whole language, but with JavaScript that's given - you have to find the good parts or go mad trying. I was wrong about JavaScript as a whole, but maybe less so about JavaScript as it is successfully used.

It's both functional and object oriented. Also, see Scala for good combination of functional and OO.

Inheritance is easily overused, but that doesn't mean that we should just avoid it altogether.

The problem IMO is that we are stuck in a view that inheritance is really about ontology, when what we really mean, and want, is code reuse. It's very hard to make a 5 level deep ontology not break down. This is why we have this whole 'prefer composition over inheritance' business.

But while we are stuck with that kind of mindset in Java (at least pre-Java 8), we miss the capability of using inheritance as a way of adding mixins. There is much power in using mixins as a way to clarify composition, while we still keep the inheritance tree very shallow.

That's one thing we get with judicious use of the Scala cake pattern, which you could easily reproduce in JavaScript: composition without really having to write a bunch of useless boilerplate. There's a nice talk out there about it: "Cake Pattern: The Bakery from the Black Lagoon".

The trick, as with most other programming techniques, is to use it carefully.
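For illustration, here is one rough way to get mixin-style composition with a shallow (here, nonexistent) inheritance tree in JavaScript; the `mixin` helper and all names are made up for the example:

```javascript
// A tiny mixin helper: copy a behavior bundle's methods onto a target.
function mixin(target, behavior) {
  for (var key in behavior) {
    if (behavior.hasOwnProperty(key)) {
      target[key] = behavior[key];
    }
  }
  return target;
}

// Behavior bundles: they hold no position in any inheritance tree.
var Serializable = {
  serialize: function () { return JSON.stringify(this.state); }
};
var Taggable = {
  tag: function (label) { this.state.tags.push(label); }
};

function Document() {
  this.state = { tags: [] };
}
// Compose behaviors instead of inheriting them.
mixin(Document.prototype, Serializable);
mixin(Document.prototype, Taggable);

var doc = new Document();
doc.tag('draft');
console.log(doc.serialize()); // {"tags":["draft"]}
```

Each behavior names what it contributes, so the composition stays explicit without any delegation boilerplate.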

I'm coming at this from the view of an Objective-C programmer; the language was heavily influenced by Smalltalk.

Having written Objective-C full-time for 5+ years, I find it rare to need inheritance in classes that actually implement application-level logic; however, it's used in just about every class you make, since you inherit from Apple's base classes (UIViewController, UIView, etc.). It's useful to subclass and add categories as needed. At a functional level it's not an issue, but with that in mind I'm not sure it's a great pattern for JavaScript.

Not trying to be tangential, but I believe this is true with EmberJS, which was originally modeled on Cocoa. You inherit from Ember's base objects (ArrayController, ObjectController, etc.) and rarely find yourself extending classes that you write. Here is a hierarchy of the "base" classes: http://emberjs.jsbin.com/bahetoka/1

"what we really mean, and want, is code reuse"

I'm not sure if you meant it this way, so I'll underscore this point. Code reuse is truly the worst "reason" to use inheritance, and in fact an anti-pattern.

Composition is the only proper way to reuse code because composition is explicitly stating you are using the composed code.

You're taking it too far. One should prefer delegation/composition in most cases, but if you're trying to direct users to a restricted reuse model (inheritance), then it works fine and is often a much clearer approach than "Joe's custom delegation model".

There are plenty of examples of this in practice - someone pointed out NeXT/Apple's UIKit for example elsewhere in this thread (which also uses composition where appropriate). The Java collections hierarchy is also a good one (LinkedHashSet < HashSet < AbstractSet).

"Composition is the only proper way to reuse code because composition is explicitly stating you are using the composed code."

And every other pattern is not suggesting this very same thing?

I disagree that what we want is code re-use. I'd say we rather want encapsulation and single responsibility.

I strongly agree with your main point though: that the way we get "good" OO design is through composition and mixins

Good article, but I think the real message isn't to not use inheritance, but to use with great care.

Inheritance gives you a great characteristic -- isa relationships. And this is something you don't get with composition.

That said, the fragility with poorly constructed base classes is real. But a succinct base class can be very valuable and useful, and not that brittle. Just don't stuff cruft in it that is of questionable value. Ask yourself what's the least you can put in the base class and still provide value.

And this is where interfaces are also useful. You can get the isa relationship w/o much of the brittleness as there is no internal state associated with the interface. But you are still creating a hierarchy (just not of classes, but of interfaces).

It's a useful article, especially for those new to the idea. But the takeaway should be to use care. Not to avoid at all costs.

> But the takeaway should be to use care. Not to avoid at all costs.

I think the takeaway here is to not try to use JavaScript as if it was some other language that has support for classical inheritance, but rather to embrace the prototypal inheritance model and compose your apps in a different way. It's not a "GOTOs considered harmful"-style rage post, but rather a warning to budding JS developers who are coming from a more classical perspective. It's not that you "shouldn't use classes", it's that you shouldn't build complex hierarchies and type systems. Keep it simple!

[edited because apparently I can't read; kenjackson did mention interfaces]

I think it's still a bad idea - as you note, isa is available without necessarily having inheritance; we know inheritance is dangerous, so why include that?

In languages like C++, C# and Java, it has long been expert advice to avoid inheritance where possible, and use interfaces. But the language and language culture is leading everyone astray. Defining interfaces is usually harder or stranger-looking, and is taught as an adjunct to inheritance.

Other languages get this more right. Haskell has what it calls typeclasses, but they are more similar to interfaces in the aforementioned languages.

We seem to be making exactly the same mistake in the ECMAScript standard, except it's even worse; you can't define an interface even if you wanted to.

> Haskell has what it calls types

I think you mean typeclasses?

One of the things that always bothered me about interfaces was the inability to define a default implementation, especially when developing UIs. A proper mixin system (like what's used by React's components) has solved this particular problem for me.

> typeclasses?

Fixed. Thanks.

I agree with you about Haskell. I almost went into a discussion of how Rust does this. They don't have subtypes, but they do have code reuse if you explicitly declare it. Caution, I'm just an admirer of Rust, not a user. So maybe it sucks too, but this seems closer to how things ought to be.


btw, Rust will likely add virtual functions with inherited implementations because it makes some Servo code easier to write and more efficient:


Haskell does allow you to define default implementations, though. In at least two different ways.

What would "defining an interface" even mean in a non-type checked language? I'd say: ECMAScript already has better interface support than Java has because it supports implicit interfaces. In Java two classes can only be interface-compatible if they have a shared dependency/are at least indirectly coupled. In a language with duck typing and without static typing interfaces are present implicitly.

You're correct that adding interfaces to JavaScript would also require adding optional type declarations. But there are lots of languages based on JavaScript, or which compile to JavaScript, which offer optional type declarations. It seems to me they could have gone down this path.

I don't agree that it's better to have an "interface" which exists purely in the head of every programmer working on the code. Languages can be doing more for us, and it doesn't have to be the kind of verbosity that Java offers.

Maybe that was a misunderstanding: I didn't mean to say that interfaces should exist purely in the head of the programmer. I did say that implicit interfaces/protocols (Go, Objective-C) are better than interfaces that need shared dependencies (Java/pure virtuals in C++). If I had to order it I'd say: duck typing better than explicit interfaces; implicit interfaces that are backed by static type checks "better than" duck typing. The "better than" in quotes for the latter because it really is more about how much static analysis the language supports and it's more a matter of opinion.

But I'm relatively convinced that interfaces/protocols that require coupling of implementations are clearly inferior to the ones that are implicit.

Probably a lot like Clojure's protocols, which are precisely that. Effectively, functions declared in a protocol are just lookup tables that resolve to a particular implementation based on an object's type at runtime. You can query a value at runtime for whether it implements a protocol, too.
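That dispatch-table mechanism is easy to sketch in plain JavaScript (a toy version, not how Clojure actually implements it; all names are made up):

```javascript
// A protocol "function" backed by a lookup table of implementations,
// keyed by the constructor name of the value at call time.
function defProtocolFn() {
  var impls = {}; // type name -> implementation
  function dispatch(value) {
    var impl = impls[value.constructor.name];
    if (!impl) {
      throw new Error('no implementation for ' + value.constructor.name);
    }
    return impl(value);
  }
  dispatch.extend = function (ctor, impl) { impls[ctor.name] = impl; };
  dispatch.satisfies = function (value) {
    return Boolean(impls[value.constructor.name]);
  };
  return dispatch;
}

function Circle(r) { this.r = r; }
function Square(s) { this.s = s; }

// Extend the protocol to each type, with no shared base class.
var area = defProtocolFn();
area.extend(Circle, function (c) { return Math.PI * c.r * c.r; });
area.extend(Square, function (sq) { return sq.s * sq.s; });

console.log(area(new Square(3)));           // 9
console.log(area.satisfies(new Circle(1))); // true
```

Dispatching on constructor names is a simplification that breaks under minification or anonymous constructors; a real library would key on the constructor object itself.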

A statically typed language enforces an interface, but an interface is just a concept. All it does is ask: does this object respond to the following methods? You and your team can ask that question, or the type checker can.
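In JavaScript that question can even be asked at runtime; a hypothetical helper:

```javascript
// Check whether an object responds to every method an "interface" names.
function implementsInterface(obj, methodNames) {
  return methodNames.every(function (name) {
    return typeof obj[name] === 'function';
  });
}

// An "interface" is nothing more than a list of method names.
var Enumerable = ['forEach', 'map'];

console.log(implementsInterface([1, 2, 3], Enumerable)); // true
console.log(implementsInterface({}, Enumerable));        // false
```

This is the duck-typing question ("does it respond to these methods?") made explicit, rather than a static guarantee.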

How am I conflating them when I specifically say you can get isa without inheritance?

Okay I reread this and I don't know how I missed the second section. I assume you didn't add that later. Sorry I am jet-lagged. Revising comment.

It was in the original. No prob. I misread all the time too, without jet lag as an excuse even. 😊

You can get the isa relationship w/o much of the brittleness as there is no internal state associated with the interface.

You can even get the relationship without brittleness but with some internal state, as long as the superclass wraps the internal state in a property. For example, in Python, the currentBalance private state would obviously be implemented as a property, not an instance variable, so that it could be mutated by subclasses while still hiding the private implementation in the superclass. (You can do this in JS too, since ES5: getter/setter accessors defined with Object.defineProperty play much the same role as Python's properties.) So the lesson here isn't really "don't use class hierarchies", it's "don't use class hierarchies stupidly".
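JavaScript's closest analogue to a Python property is an ES5 accessor; a sketch of the same idea (names are illustrative):

```javascript
// The superclass hides its balance behind an accessor property, so
// subclasses read and write `this.currentBalance` without ever
// touching the private storage directly.
function Account() {
  var balance = 0; // the real storage, hidden in the closure
  Object.defineProperty(this, 'currentBalance', {
    get: function () { return balance; },
    set: function (value) { balance = value; }
  });
}

function SavingsAccount(rate) {
  Account.call(this);
  this.addInterest = function () {
    // The subclass mutates state only through the public property.
    this.currentBalance = this.currentBalance * (1 + rate);
  };
}

var savings = new SavingsAccount(0.05);
savings.currentBalance = 100;
savings.addInterest();
console.log(savings.currentBalance); // 105
```

Because all access goes through the accessor, Account could later change how the balance is stored without breaking SavingsAccount.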

With some things/people/cultures, it's not enough to say "carefully evaluate your use case and then make a decision based on the pro-con balance" -- even though that's frequently the place you want people to arrive at -- because practice is so far out of balance.

In the main it seems that classical OOP, with problem domains modelled as inheritance hierarchies, is what's been taught as the standard to aspire to for the last two decades. The fragile base class problem and other criticisms aren't unknown, of course, but awareness doesn't seem to have spread as widely as the practice of lots of subclassing, which is usually introduced as an essential OOP component in the first few lessons.

It might be too much to say "don't use inheritance, ever." But it might not be too much to paraphrase Knuth about optimization and give people two rules regarding it: (1) don't use it (2) (for experts only) don't use it yet.

I think the takeaway is often that inheritance doesn't give you isa relationships; it gives you one tool for building isa relationships. So long as you use it judiciously and ensure that you're creating a genuine supertype then things are OK.

There are some contradictions here. For example, the author makes the following statement about encapsulation violations being permitted, but not followed as best practice:

"JavaScript does not enforce private state, but it’s easy to write well-encapsulated programs: simply avoid having one object directly manipulate another object’s properties. Forty years after Smalltalk was invented, this is a well-understood principle."

However, the author doesn't seem to really understand this as he makes the case that access to private state and behavior of a "superclass" violates encapsulation:

"In JavaScript (and other languages in the same family), classes and subclasses share access to the object’s private properties. It is not possible to change an implementation detail for Account without carefully checking every single subclass and the code depending on those subclasses to see if our internal, “private” change will break them."

Well, yes they do allow access, but that doesn't mean you have to use it! This is considered bad practice when extending any class in other languages that I'm familiar with (C++, Ruby). Please take some of your own advice.

I do agree that hierarchies do not fit the real world as well as the contrived examples from my first OOP classes, and they should be used with extreme caution. Let's not throw out the baby with the bathwater, however.

I used to find these types of articles useful. But now I see them more as dogma. Great engineers don't struggle with things like JavaScript inheritance because they understand best practices and trade offs. So in general, I find it more useful to read about best practices and trade offs than "xyz considered harmful" articles that don't present viable alternatives to xyz.

That said whenever I see something on raganwald.com I'll still read it :)


The alternative he presented is not to share private state between class and superclass. This really isn't a "know the right tool for the right situation" kind of thing -- it's almost never a good thing.

You could accomplish this alternative, among other ways, by:

1. Using composition and delegation.
2. Using mixins, if your language supports them.
3. Using inheritance, but depending only upon your superclass's public interface.
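For instance, option 1 in JavaScript, with the inheritance relationship from the article's Account example replaced by composition and explicit delegation (a sketch; names are illustrative):

```javascript
function Account() {
  var balance = 0; // private to Account; nothing else can see it
  this.deposit = function (amount) { balance += amount; };
  this.getBalance = function () { return balance; };
}

// Instead of subclassing Account, SavingsAccount *has* an Account
// and delegates to its public interface only.
function SavingsAccount() {
  var account = new Account();
  this.deposit = function (amount) { account.deposit(amount); };
  this.getBalance = function () { return account.getBalance(); };
  this.addBonus = function () { account.deposit(10); };
}

var savings = new SavingsAccount();
savings.deposit(100);
savings.addBonus();
console.log(savings.getBalance()); // 110
```

Account's internals can now change freely; SavingsAccount can only break if the public interface breaks. The writing of those delegator methods is the "pain in the butt" mentioned elsewhere in the thread.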

The substance of the article is spot on, but I have to take issue with the terminology it uses. The problem he's talking about isn't classes, it's inheritance.

This is an important distinction. Javascript doesn't have classes, but it does have inheritance. The problems raganwald points out with the Account example come not from the organization of the code into pseudo-classes, but from the fact that it uses Javascript's inheritance mechanism.

It would be perfectly possible to write a version of the example that avoided pseudo-classes yet still exhibited the "fragile prototype problem," and conversely, one can easily write a version in Smalltalk that uses composition instead of inheritance and thus has no fragile base classes.

It's just as true in other languages. More often than not, programmers thinking about code reuse attempt to solve it via inheritance, which is almost always the wrong answer. With some persistence, you can end up with a nice four or five level deep inheritance tree with 10 methods randomly overridden at various levels in the tree. Good luck figuring out how the damn thing actually works if you didn't write it.

Okay, but that kind of polymorphism is actually a replacement for conditional logic, e.g. the switch-on-type antipattern.

People hate conditionals. People hate polymorphism.

Everybody is wrong about everything pretty much all of the time, it appears.

In JavaScript, like other duck typed languages, you have dynamic dispatch without inheritance.

For example:

    var objA = {doIt: function () { console.log('hello from a'); }};
    var objB = {doIt: function () { console.log('hello from b'); }};
    var doItDynamically = function (doitObj) { doitObj.doIt(); };
    doItDynamically(objA); // hello from a
    doItDynamically(objB); // hello from b
objA and objB do not share a common class (since classes do not exist in JavaScript), but they implement the same interface. For this reason, they can be used polymorphically, as if they shared a common base class or interface in Java or C++.

Everybody is wrong about everything pretty much all of the time, it appears.

Misanthropy Driven Development? I would like it if I didn't already hate everyone and everything.

You can usually get code reuse via composition. If need be, you can have your objects implement one or more common interfaces (which may themselves inherit from other interfaces, that's not a problem).

We want to use classes because we want to be able to invoke an operation and have the exact behavior determined by the type of the object or objects on which the operation was invoked.

The Common Lisp philosophy of classes and OO fits the thicket-ness of the real world better: generic functions, multimethods, mixins, etc. Just embracing the notion that classes and encapsulation might be orthogonal issues opens up better ways to use a class system.

I am a bit concerned that the standards committee is going to make the language worse while trying to fix it.

Yes it is inconvenient to do traditional OO programming with JavaScript, but I'm not convinced that is a bad thing. Encouraging subclassing, as pointed out by the author, could actually be detrimental.

    I am a bit concerned that the standards committee is going to make the language worse while trying to fix it.
AFAIK the goal is "write what you mean", and not "fake stuff because the language lacks structure".

Don't want to use any of the new ES6 syntax? Don't use it. And yes, the committee SHOULD meet developers' demands as much as possible. JavaScript has no clear paradigm; everything should be on the table.

To clarify, what concerns me is that by adding syntax that is similar to Java, developers coming from other languages (namely Java), won't take the time to understand the differences between the two languages and continue to write applications as they would in Java.

Everyone has their biases, and my bias would be to embrace the dynamic and functional aspects of the languages that separate it from Java, rather than creating new syntax that pastes over the fact that the language is fundamentally different.

I'm a Java programmer and I think classes in JS would be awful. Tacking on the ES6 behaviour will not make JS a sane OO language, it'll make JS a bunch of new, inconsistent behaviours tacked onto the old, inconsistent behaviours.

FWIW I'm with you: I think that treating JS as a functional language is a cleaner fit that crowbar-ing OOP into it.

You don't like TypeScript?

I think it was a valid approach to the problem and I salute MS for it, but no - I don't like it. We considered it at work and felt we were fighting JS's natural tendencies too much.

Our decision is to deal with JS on its own terms. ClojureScript would be appealing if it didn't involve an AOT compiler (and one that's not trivial to set up on Windows).

The ES6 changes are a mixed bag. We get some nice syntax improvements and an overly complex module system. I'm mostly positive on the new class syntax, as it does simplify OO code. However, as the author points out, it is still the same JavaScript "imitation" objects. I agree with you that it could actually make it more tempting to build complex and fragile object hierarchies.

Class hierarchies are easily made fragile, and easily fail to model things correctly over time.

The cruft -- the drift between what is being modeled and what is represented in the code, the lack of DRY -- certainly accumulates with class hierarchies.

But the idea of inheritance and tree structures has been around and strong for the past 50 years -- not because people are inherently lazy -- but because it models something very primitive about programming: the evolutionary nature of development and thinking.

This is extremely non-mathematical, but extremely non-trivial.

Tree structures (and by extension 'hierarchies') are perhaps the most fundamental way to organize data. Nothing is more fundamental than a tree search -- it is the basis of efficient (logarithmic rather than linear) access times, and of how we organize knowledge and memory. It's why evolution proceeds in tree structures. We organize data in our minds in tree structures. Life over millennia organizes itself in tree structures.

I'm sad to report that the world will not quickly become clear and abstract and orthogonal a la Haskell or other pure languages. Knowledge, life, everything proceeds via evolution, not something a priori. The sooner one really accepts that, the sooner one has new ways to interface with this reality more practically and effectively.

People sometimes make the point of use the right tool for the job - as a reason for still using OOP, etc. I'd not say that -- but that the right tool for the job where the job is an evolving code base is actually something object-oriented with inheritance. Except where the domain is very precise and known ahead of time. Otherwise, the manner in which it models evolution is actually quite useful (despite the fragility of inheritance and the quick ability to spin out of any orthogonal clarity).

People who program purely in Clojure or Haskell put the burden of this evolution in their own development as a programmer (there is no extension, there is just clarity and then rewrite). That's ok. People who use Java or whatever in the enterprise because it's easier for other people to get on board with it -- put the burden in the code. But the cost of modeling a problem that evolves goes somewhere.

Are you saying that `nature uses is-a hierarchies, hence it's good for source too.' or that `source code that models real world concepts so it should follow the same concepts.'?

Good question, thanks. I think I was conflating both -- but I think both apply.

The benefit of is-a hierarchies -- perhaps the only big benefit (to be weighed against all the other alternatives available) -- is that it solves for making the next 'adaptation' as simple as possible. It optimizes for the very next extension step. It is slightly less conceptually complex to extend an object than to composite a new feature.

It does that at the cost of complexity and brittleness. But in both the real world and in source maintenance this is occasionally useful.

Let's say I'm a startup practicing lean and experimentation as much as possible. What I want is a framework that allows me to tweak things with as few steps as possible. What I don't want is to refactor every time I want to do an experiment.

Anyway, I think code generally should model our understanding of the problem space and the real-world concepts in play. Extension is not a bad way to begin modeling that understanding, because growing the first steps (even though they will clutter, since nothing in OOP encourages orthogonality besides one's own sense of cleanliness) is often good for the growth of our understanding.

Huge fan of this article. Makes some really great points! Wish I could share and vote it up a thousand times. :)

I have been writing about exactly this topic, of why classes don't make sense (specifically for JS), in my second title of the "You Don't Know JS" book series, "this & Object Prototypes".

In particular, Chapter 6 makes the case for an alternate pattern to class-oriented design, which I call OLOO (objects-linked-to-other-objects) style code. OLOO implements the "behavior delegation" design pattern, and simply embraces peer objects linked together, instead of the awkwardness of parent-child abstract class semantics.


Kyle, the figures (fig4.png, fig5.png) are missing from that page. Other pages too. I'm one of your kickstarter contributors, but I was unable to read & review some of the chapters because significant parts of the text refer to missing diagrams.

They're coming. It's an artifact of working with a book publisher and the art department and all that. Sorry for the confusion.

A rough, hand drawn sketch scanned to gif or png until the official artwork is ready would be 100x better than a broken link.

"objects-linked-to-other-objects" - why not use the established "composition over inheritance": http://en.wikipedia.org/wiki/Composition_over_inheritance

Or does it mean something else?

OLOO is quite different from both inheritance and composition. It embraces delegation rather than covering it up or running away from it. :)

Can you give an actual example of the difference between composition and delegation? In my definition those two are just two different perspectives on the same thing.

A super quick, simple illustration of the difference, which comes down to the fact that delegation is differential, whereas composition is complete:
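(The original linked snippet is missing from this thread, so here is a rough sketch of my own, not Kyle's exact code, of what "differential" delegation looks like: the linked object supplies only what the delegator lacks, and lookup falls through the prototype link at call time.)

```javascript
// OLOO-style delegation: AuthController is LINKED to LoginController,
// not descended from it. It defines only its own difference.
var LoginController = {
  errors: [],
  showError: function (msg) {
    this.errors.push(msg);
  }
};

var AuthController = Object.create(LoginController);
AuthController.checkAuth = function (user) {
  if (!user) {
    this.showError("no user"); // found via the [[Prototype]] link
  }
};

AuthController.checkAuth(null);

// AuthController never defined showError or errors itself...
console.log(AuthController.hasOwnProperty("showError")); // false
// ...yet the call worked, differentially, through the link.
```

Contrast with composition, which would copy or wrap the complete LoginController behavior into AuthController up front.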


For a bit more "real world" of a scenario, you can also compare the `LoginController` / `AuthController` mechanism here, first shown with inheritance+composition, and then shown with simpler OLOO-style delegation:


Ah, thanks. That clarifies things. Neat!

"Only, the real world doesn’t work that way. It really doesn’t work that way. In morphology, for example, we have penguins, birds that swim. And the bat, a mammal that flies."

Yes but we have Single Responsibility Principle and while an animal is one object in physical world, it doesn't mean it should be a single object in OOP. Start breaking it down...

    public interface BodyType {}

    public class TwoArmsTwoLegs implements BodyType {}
    public class FourLegs implements BodyType {}
    public class NoLimbs implements BodyType {}  // implied by Slither below

    public interface Locomotion<B extends BodyType> {
        void walk(B body);
    }

    public class BipedWalk implements Locomotion<TwoArmsTwoLegs> {
        public void walk(TwoArmsTwoLegs body) {}
    }

    public class Slither implements Locomotion<NoLimbs> {
        public void walk(NoLimbs body) {}
    }

    public class Animal {
        BodyType body;
        Locomotion<?> locomotion;

        Animal(BodyType body, Locomotion<?> locomotion) {
            this.body = body;
            this.locomotion = locomotion;
        }
    }

    Animal human = new Animal(new TwoArmsTwoLegs(), new BipedWalk());

(Code sample from an article in Software Development Journal by Łukasz Baran)

tl;dr: Superclasses make your code brittle. https://en.wikipedia.org/wiki/Fragile_base_class

How many people still try to fit their software design into the class hierarchy model? I've been on the composition-not-inheritance side for so long I can't do "traditional OO" justice, but I'd love to hear the counter-arguments in case I'm wrong.

No, you're right. "Favor composition over inheritance" has been the go-to OOP design advice for years.

The distinguishing feature of OO is modeling concepts by encapsulating state and behavior. For some reason, introductions tend to focus on complex inheritance structures instead. I think that's really unfortunate because it sends entirely the wrong message about OOP.

That said, I do occasionally use inheritance for polymorphism and limited code reuse (such as [1]), but I keep it limited in scope, and I don't think I've ever used a multi-level inheritance hierarchy.

[1] In my Let's Play TDD screencast series, I use inheritance when modeling a "Dollars" value object. There's a Dollars base class and then various specializations: ValidDollars, InvalidDollars, and UserEnteredDollars.

Source code: https://github.com/jamesshore/lets_play_tdd/tree/master/src/...

The screencast: http://www.jamesshore.com/Blog/Lets-Play/

Any dependency makes your code brittle. On the other hand, you can't reuse without dependencies. Power/safety tradeoff.

Superclasses are just a special case of this.

More like Superclasses used incorrectly make your code brittle so don't use superclasses.

In so far as I have never found a use case for a class hierarchy 5 deep, I agree with him. On the other hand, the argument that because you can do it wrong means you shouldn't do it at all isn't compelling. I don't think there is a language or language feature that you can't use incorrectly.

Ever since I've started writing Objective-C, a lot of Javascript's glaring issues have started to melt away. Yes, JS doesn't have an understanding of protocol, but that's just a matter of implementing convention and being disciplined. JS is remarkable in that it is so malleable that one can use myriad patterns with it.

I've found that the delegate pattern, for instance, can be incredibly powerful when used in conjunction with JS, especially when it comes to extending a class with functionality that may not be inherent to its topology (think class Ant, to describe an ant, and class FlyingAnimal -- wings, etc. -- to describe a queen ant).

Although I'm very, very sympathetic to the idea that class hierarchies often cause more pain than they're worth, using JS to make that point is a bad idea.

In most actual OO languages, there is a clearly defined interface between a base class and its inheritors: e.g. in C# you have protected (to expose state), abstract (to force implementation), and virtual (to optionally allow overriding). In no way are you forced to expose all internal state (even though people often do).

I think there's a case to be made against class hierarchies, and also against using OO in javascript. But I'm not sure Ragan made either of them well here.

What is missing from the discussion here is the case where domain knowledge is less important. For example, almost every widget system I have ever looked at uses class inheritance, because it makes it relatively easy to manage consistent interfaces across classes. This is true of GTK, wxWidgets, and many more.

It is true that the natural world doesn't necessarily admit of perfect neat classifications generally, much less trees. However, when we are talking about purely engineered solutions, the same arguments don't apply in the same way.

Here's an example. In LedgerSMB 1.4, we use (shallow) class inheritance in a reporting framework. It works, and works well. Reports inherit an abstract class with lots of hooks for customization but a lot of defaults.

In future versions we will likely be moving away from an inheritance-based approach, not because of the arguments here or the maintenance issues (which will crop up any time you rely on external components) but because we think we can create clearer code by moving from a class/inheritance approach to a DSL approach.

I am not sure that class contracts and DSL syntax contracts are necessarily any different from a maintenance perspective other than the fact that the latter strikes me as resulting in clearer code.

Anybody have an original source for "prefer composition over inheritance"? I've heard it for years, but never with an attribution.

FWIW, I noticed that in my old C++ book (Stroustrup '91) there's some really bogus inheritance examples - window -> window_w_banner -> window_w_menu -> window_w_banner_and_menu etc etc etc.

The idea is presumably older, but I think the current usage can almost always be traced back to _Design Patterns_ (GoF, 1995). It's in the Introduction.

I don't think Stroustrup really gets OO. I have yet to read a book of his that doesn't have some absolute howlers.

> Anybody have an original source for "prefer composition over inheritance"

It is mentioned in the GoF Patterns book[1], from 1994/1995. that's what I think of when I hear the phrase.

1: http://en.wikipedia.org/wiki/Design_Patterns

"Hey, guys. Here's a really poorly considered hierarchy of classes, where I haven't really made any real effort to separate concerns or otherwise prepare for the problems which I am specifically creating.

Now, look at all these bad things that I've done! NEVER DO ANYTHING SIMILAR EVER."

That pretty much sums it up.

Yet another post titled "don't do X", which actually reads as "do not exaggerate with X" or "did you know? X has some caveats".

Of course inheritance is not a solution for everything. Avoid using deep inheritance hierarchies, you'll paint yourself into a corner. It's advisable to prefer composition over inheritance. You should not be that guy who only has a hammer and everything looks like a nail to him. But it doesn't translate into: "hammer? don't do that!"

All the examples he's giving either demonstrate abusing the inheritance concept or just show that it has certain limitations. There are well known solutions and guidelines for dealing with problems such as fragile base class, other than throwing the whole paradigm out of the window

It's difficult to argue with this article. I can count on one hand the number of times non-trivial class hierarchies (that is, more than 2-3 classes) made my life easier, and every time that was the case I was writing a compiler or a code generator or something similar.

The Magnitude/Number hierarchy in Smalltalk is amazing.

> It turns out that our knowledge of the behaviour of non-trivial domains (like zoology or banking) does not classify into a nice tree, it forms a directed acyclic graph. Or if we are to stay in the metaphor, it’s a thicket.

That completely chimes with my experience. I wondered if I was doing OOP wrong, as any time the size/complexity of a project (or module) gets above a certain level, a Thicket results in my code.

> Classes are the wrong semantic model, and the wisdom of fifty years of experience with them is that there are better ways to compose programs.

Where's your source/links? What? This needs expanding - if there are better ways, outline your evidence and show us where we're going wrong :)

That was something I wanted to ask too.

Perhaps it is the Go OO model?

yeah. what's a better semantic model?

This is generally true. On the other hand, interfaces are a good thing, especially in a language like Go where you can explicitly declare them and then pass in anything that happens to match that interface. (In JavaScript, this is unfortunately implicit, so it requires better documentation and sometimes runtime checks to ensure sanity.)

If you implement an interface using delegation, you get something quite like subclassing, except that the subclass-superclass interface is explicit and the superclass's type is entirely hidden. Sadly, few languages make this easy so it often requires some boilerplate or meta-programming.
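In JavaScript, where the interface is implicit, the "runtime checks to ensure sanity" mentioned above might look like this (a sketch with invented names, not any particular library's API): verify up front that an object supplies the required methods before using it polymorphically.

```javascript
// A hypothetical runtime "interface check" for JavaScript's implicit,
// duck-typed interfaces: confirm the required methods exist.
function implementsInterface(obj, methodNames) {
  return methodNames.every(function (name) {
    return typeof obj[name] === "function";
  });
}

// An "interface" is just a documented list of method names.
var ReadableStore = ["get", "has"];

var memoryStore = {
  _data: { answer: 42 },
  get: function (key) { return this._data[key]; },
  has: function (key) { return key in this._data; }
};

console.log(implementsInterface(memoryStore, ReadableStore)); // true
console.log(implementsInterface({}, ReadableStore));          // false
```

Anything that passes the check can be substituted in, which gives the duck-typed analogue of satisfying a Go interface without any declared relationship between types.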

Delegation can have a performance cost that inheritance doesn't have, however, at least depending on the compiler. Go's current compiler seems engineered for correctness, not performance; at least the last time I checked, the JVM was beating it on several benchmarks.

Go seems to be headed towards pretty fast and predictable performance rather than the fastest possible performance at the expense of predictability. (For example, the work to replace segmented stacks with contiguous stacks improves performance in some cases but also makes function call performance more predictable.)

A JVM can have fast or slow performance depending on JVM flags, warmup time, and the phase of the moon.

Not really bashing Go, but I was disappointed that for what is purportedly a 'systems programming language', performance was far behind C/C++/D/Mars/etc. I expect a little more from a statically ahead-of-time compiled language.

For 80% of apps, that's just fine, but would you write a mobile web browser in Go, or say, Call of Duty? Seems you'd be leaving 50% performance on the table in some scenarios.

I tend to give 'systems programming language' special status. They're how you achieve maximum system performance if not writing in assembly. You write kernels, device drivers, virtual machines, and games in them. If they are imposing severe performance or memory overhead, then they are application programming languages, not systems programming languages.

I think the messaging around Go has changed a bit since it was originally introduced. If you look at the front page of golang.org, it doesn't talk about Go as a systems programming language. I do remember the way it was pitched initially, but I agree with what you're saying and think that even the Go people themselves wouldn't lump Go in with C for things like kernels or leading edge games.

Depending on the language, delegation can be faster. For instance in C++, virtual function dispatch is slower than calling a non-virtual member. It is easier for the compiler to optimize member function calls vs virtual functions which cannot be inlined.

If the compiler does global optimizations, it can infer when virtual functions aren't really virtual. For example, if a virtual function is never overridden, or never overridden by any live type (after dead code removal), then it can be implicitly converted to a non-virtual function.

Since C++ doesn't have interfaces, only pure virtual functions, you'll pay the cost of the virtual dispatch even if you use delegation, won't you? (The context of my comment is the use of interfaces, with implementations using delegation instead to do implementation inheritance.)


    interface I

    class Concrete implements I
        delegate to Shared

    class Concrete2 implements I
        delegate to Shared

Yes, if you don't use any interfaces at all, and just a concrete class, then it's a different situation.

I like these articles, but I think there are more than enough of them out there and not enough of those that show you how to use composition.

Everyone who has spent enough time with OO and classical inheritance knows the problems, but most have never seen how to convert a mess of classes and inheritance into a more functional approach with composition.

Don't get me wrong, there was good content here, but I was hoping that after the conclusion, the post was going to go into how to use composition as an alternative.

This reminds me of a design question I recently faced when building a simple multi-player game. The idea was pretty standard: the server sent events (player dies, player moves to pos, etc.) to its clients over a TCP socket. When programming the client in Java (for Android) I wanted a clean way to update the world based on the event type. In Haskell I would have done something like

data Event = PlayerDied Player Reason | PlayerMove Player Coords | ...

and use pattern matching on the event type. In C I would have used a combination of unions and structs with an event type. But how to do that in Java? I ended up with

interface Event { public void update(Game g); }

and used e.g.

    class DeathEvent implements Event {
        private final String player, reason;

        DeathEvent(String player, String reason) {
            this.player = player;
            this.reason = reason;
        }

        public void update(Game g) {
            g.killPlayer(player, reason);
        }
    }
Combined with a parsing function (public Event parse(String line) {...}) I could read from the socket and update the game in a convenient way, but to be fair I used that mostly because Java guys discourage you from using instanceof, although it seemed clearer.

So is this the preferred way to do something like this? I think "they" (the OOP warriors) call this the Visitor pattern. However I really find the data-type encapsulation in Haskell and other languages (in Python a tuple (type, object) would do) superior. But maybe my Java just got rusty.

In javascript*

I think this highlights a problem with JavaScript more than OOP: we're trying to fit it into use cases that are simply too complex for its design. It wasn't made to build your goddamn bank accounting system, it was made so that "nonprofessional programmers" could animate things on websites.

But it also highlights a problem it doesn't talk about: language in computer science, and how it affects how we think about things.

In this instance, public and private are terrible names. They had to tell us in programming classes that they're not related to security, which means that the privacy metaphor is a terrible idea because it's not instinctual.

This in turn causes us to shoehorn class design into things they perhaps shouldn't be. At this point, for complex programs we should be describing things in a much more complex way than "accessible from the outside or not". As the article points out, it doesn't matter that the classes are external, because you can just as well break things from the inside.

This is a complex problem, but I think the beginning of a solution is to a) Depend on meta-information (or better implement a flexible, non-arbitrary "access" structure) and b) Use the right tool for the right job, in this case not JS.

Class hierarchies are just mental tools, not how machines/programs/automata actually work. If the mental tools implode due to exceptions and complexities, that's a problem with their use, not with the tools themselves. Before blaming hierarchies, you should blame yourself for using the tools in the wrong ways.

Isn't he mostly complaining about lack of encapsulation? So what if you do encapsulate your data? That's totally possible in most programming languages, and it's even possible in javascript if you drop prototype inheritance. You can use closures to encapsulate your data.
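For example, a closure-based account (a sketch with invented names) keeps its balance genuinely private: no subclass, mixin, or caller can reach it except through the returned methods.

```javascript
// The balance lives only in the factory function's scope. There is no
// property on the returned object to poke at, so the representation
// can change freely without breaking callers.
function makeAccount(openingBalance) {
  var balance = openingBalance; // truly private, not just by convention

  return {
    deposit: function (amount) { balance += amount; },
    withdraw: function (amount) { balance -= amount; },
    getBalance: function () { return balance; }
  };
}

var account = makeAccount(100);
account.deposit(50);
console.log(account.getBalance()); // 150
console.log(account.balance);      // undefined -- no direct access
```

The trade-off is that each object carries its own copies of the methods instead of sharing them on a prototype, which is precisely the "drop prototype inheritance" cost the comment mentions.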

This is interesting, reading some of the linked documents, my naive generalization is that it pushes the entities into using things like maps and dictionaries instead of static properties. The "system manager" stuff just pulls things out of the maps and feeds them to functions to do work, so it is very much 'data-driven' and I must assume more work is put into "configuration" of the entities, just like a data-driven business process requires "configuration" of the order processing pipeline. http://www.richardlord.net/blog/what-is-an-entity-framework

Wonder if this implies you know what you are going to be doing in the future. As said by Sandi Metz, the true value of good design is to reduce the risks of future change. I think it is a valid endeavor to study what reduces risk and what increases risk. This doesn't have to be about "design patterns".

Something I haven't seen mentioned in this discussion so far is the Data, Context, Interaction (DCI) pattern.


As mentioned at points in this thread, some objects need different behavior based on their context. DCI is a thought-provoking way to represent that (though one that is not in common use and that is often described as "being done wrong" on the object composition mailing list[1]).

[1]: https://groups.google.com/forum/#!forum/object-composition

Cocoa is good because, among other things, it treads lightly on class hierarchies.

Personally I believe that software design is a highly subjective art. People that are meant to maintain a piece of code are different. Some may be more creative and less pragmatic, others may be more structured and clean in their design.

The truth is that there's no silver bullet. You can build software based on hierarchy of classes because your mind works better with that structure, but others may find it completely inappropriate. We're human and our mind works differently from one another.

In that sense I truly believe software is much closer to art than engineering.

This approach to discussing inheritance treats JavaScript as if you're a Java or C++ programmer, and completely lacks any clarity with how inheritance in JavaScript really works.

Best intro I've read on the topic I'm talking about is from Alex Sexton (just read the last example, it really hits the nail on the head):


So you might want a wee bit of hierarchy, if you're thinking of that "shared options" scenario, but not an "OO type abstraction tree".

So, you might have something like an "Account.prototype.primeInterestRate" property that you can change in a running program, and then all the other types of account can calculate interest based on that shared property.

However, the more experienced JS developers I've met might take those "Account.prototype.balance" and "Account.prototype.deposit" methods and push them into a "mixin" type (like "CurrentBalance"), where those methods are copied (not inherited) onto the child class prototype; those methods might have initializer helpers to set up the "currentBalance" property they use. This mixin approach only gets gnarly if there's any feature envy. (Document your object properties clearly, folks. This is where JavaScript's flexibility often becomes a crutch -- lots of issues happen if mixin code uses "this.foo" for different things.)

Anyhow, what's interesting here is that Account carries the property that's shared, but CurrentBalance carries "behaviors", and is not shared, and your "child classes" like VisaDebitAccount embed both in different ways. It is a very different way of thinking about object relationships, and often works smoothly.
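A rough sketch of that split (names like CurrentBalance and VisaDebitAccount are the commenter's; the code shape is my own guess, using ES6's Object.assign for the copy): shared data lives on the prototype chain, while behavior is copied in as a mixin.

```javascript
// The shared, changeable property lives on Account.prototype...
function Account() {}
Account.prototype.primeInterestRate = 0.03;

// ...while CurrentBalance is a bag of behaviors that gets COPIED,
// not inherited, onto each concrete prototype.
var CurrentBalance = {
  initBalance: function () { this.currentBalance = 0; },
  deposit: function (amount) { this.currentBalance += amount; }
};

function VisaDebitAccount() {
  this.initBalance(); // mixin's initializer helper
}
VisaDebitAccount.prototype = Object.create(Account.prototype);
Object.assign(VisaDebitAccount.prototype, CurrentBalance); // mixin copy

var visa = new VisaDebitAccount();
visa.deposit(25);
console.log(visa.currentBalance);    // 25 (own behavior, via mixin)
console.log(visa.primeInterestRate); // 0.03 (shared, via prototype)

// Changing the shared property affects every account at once:
Account.prototype.primeInterestRate = 0.05;
console.log(visa.primeInterestRate); // 0.05
```

Account carries shared state, CurrentBalance carries copied behavior, and the "child class" embeds both in different ways, as the comment describes.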

But if you're using classes in JavaScript like you would Java, well, then, you're not really using JavaScript, right? And, this whole talk about biological-style ontology just becomes the wrong metaphor, because while "humans are a primate" we can't change aspects of primates to add behavior to people!

The way inheritance doesn't get encapsulated is the same way I felt about modules in Ruby. You could include them into a class or other modules, but the interface was never well defined, and you could cause incompatibilities by relying on the implementation of the class you include into, unless you're disciplined enough to only use methods, and not attributes.

On the other hand, defining all the interfaces all the time, like in Java, was painstaking. I hope for some sort of middle ground.

The most challenging aspect of inheritance modelling is to recognize it by its intrinsics -- the deliberate creation of coupled properties and behaviors associated with the "isa" declaration -- and not simply as a form of taxonomy.

Programmers are still inclined to use hierarchies. They are implicit to source code in a more general sense, with indentation, nested logic, etc. But we also need affordances to break up hierarchies and keep them limited.

  ChequingAccount.prototype.process = function (cheque) {
    this.setBalance(this.getBalance() - cheque.amount());
    return this;
  };

From a practical point of view, mixing in behavior and implementing via encapsulation is usually easier to maintain and extend than inheritance.

Another battle in the 60 year language wars based on criteria that are so obscure that have battles just over the darned criteria.

My js style began with a heavy usage of classes. I then left that for prototype composition. Recently, I've arrived at a style influenced by Haskell's separation of functions and data - namespaced objects of functions that act on plain json-serializable data. Flexible, simple, and perfomant. I don't miss 'this' at all.
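That style might look like this (a sketch with invented names): a namespaced object of pure functions acting on plain, JSON-serializable data, with no `this` and no prototypes.

```javascript
// All state is plain data; all behavior lives in a namespace of
// functions that take the data as an explicit argument and return
// new data rather than mutating, Haskell-style.
var AccountFns = {
  create: function (owner) {
    return { owner: owner, balance: 0 };
  },
  deposit: function (account, amount) {
    return { owner: account.owner, balance: account.balance + amount };
  }
};

var a1 = AccountFns.create("raganwald");
var a2 = AccountFns.deposit(a1, 100);

console.log(a2.balance);         // 100
console.log(a1.balance);         // 0 -- the original is untouched
console.log(JSON.stringify(a2)); // trivially serializable
```

Because the data is inert, it serializes, diffs, and tests trivially, and there is no dispatch hierarchy to reason about at all.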

> That kind of ontology is useful for writing requirements, use cases, tests, and so on. But that doesn’t mean that it’s useful for writing the code that implements bank accounts.

Doesn't good, easily readable code look very similar to the requirements it was written from?

It's pretty much a promotion/defense of Smalltalk. Why does everybody have grudges against JavaScript?

Class hierarchies are OK when they have one level. That's essentially an object-oriented switch statement with bells and whistles.

I can't remember one class hierarchy more than one level deep that was worth it.

They save a little code - true, but at the cost of coupling, making the code harder to change, and forcing early, debatable decisions on the programmer (for example, which classification is more important and goes first).
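A one-level hierarchy as an "object-oriented switch statement" might look like this sketch (the shape types are invented for illustration): each subclass supplies one branch of the dispatch, and nothing inherits from the subclasses.

```javascript
// Hypothetical sketch: a single-level hierarchy where polymorphic
// dispatch replaces switch (shape.kind) { case "circle": ... }.
function Shape() {}
Shape.prototype.area = function () { throw new Error("not implemented"); };

function Circle(r) { this.r = r; }
Circle.prototype = Object.create(Shape.prototype);
Circle.prototype.area = function () { return Math.PI * this.r * this.r; };

function Square(s) { this.s = s; }
Square.prototype = Object.create(Shape.prototype);
Square.prototype.area = function () { return this.s * this.s; };

// Each subclass is one "case"; adding a case means adding a subclass,
// not editing a central switch.
const shapes = [new Circle(1), new Square(2)];
console.log(shapes.map(s => s.area())); // [Math.PI, 4]
```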

"Object Oriented design is the Roman Numerals of Computing" - Rob Pike

For more quotes see:


Another, seeing as it's PG:

"The phrase 'object-oriented' means a lot of things. Half are obvious, and the other half are mistakes." — Paul Graham

While I admire both Pike and Graham, I've never liked the use of zingers to articulate a programming point. Invariably, programming involves trade-offs, acceptance of limitations in one dimension to gain benefits in another. So, zinging the limitations is easy to do and doesn't advance understanding.

In idiomatic Python, the only reason for class inheritance is for code reuse.

In other languages, especially Java, class hierarchies are a matter of self-esteem.

Javascript programmers could do worse than emulate Python in this regard.


Please can you make the type in your blog even less readable? I think knocking back the grey so that it matches the background should do it - you're almost there, just needs a tad more.

This sort of sarcastic sniping is one thing we really don't want on Hacker News. We don't ban people for it, but it should be downvoted.

I don't mean to pick on vermooten here; lots of HNers post comments like this. Please don't post comments like this. Re-read what you post and, if it contains sarcastic sniping, edit it out.

I'll be pointing out examples of what's good and what's bad on HN, in the hope that the feedback will be helpful to the community. When I do that, I hope everyone understands that it's never personal, only about the content and only for trying to make HN better.
