""
Patterns. Can’t live with ’em, can’t live without ’em.
Every single design pattern makes your design more complicated.
Visitors. I rest my case.
""
A design pattern demonstrates a solution to a common programming problem! How can you take issue with them? You don't need to program in an OO language to use the concepts that design patterns describe.
The whole article just seems to be anecdotal and full of straw men. If it was written in jest then the humor was lost on me.
As Peter Norvig famously said, "Design patterns are bug reports against your programming language." Read his presentation on why the most common design patterns become simpler or even unnecessary when using a dynamic functional language such as Lisp: http://norvig.com/design-patterns/
The problem with the term "pattern" is that it has 2 different meanings:
1) a guide for how to build/create something.
2) a describable and characteristic arrangement.
Both can be applied to "design patterns". I.e., I'm going to solve a code reuse problem: should I use the Strategy or the Template Method pattern? Or: I'm going to solve this code reuse problem in a novel way in my inheritance-based language. I know, I'll make an abstract base class that does the reusable stuff and uses some abstract methods to fill in the changing stuff.
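That second, "novel" solution is essentially the Template Method pattern arrived at independently. A minimal Ruby sketch of the idea (class and method names here are hypothetical, just for illustration):

```ruby
# Template Method: the base class fixes the overall algorithm;
# subclasses fill in the varying steps. Names are hypothetical.
class Report
  def generate
    header + body + footer   # the reusable skeleton
  end

  def header
    "=== Report ===\n"
  end

  def footer
    "=== End ===\n"
  end

  def body
    raise NotImplementedError, "subclasses must supply a body"
  end
end

class SalesReport < Report
  def body
    "Total sales: 42\n"      # the changing part
  end
end

puts SalesReport.new.generate
```

Whether you call this "using the Template Method pattern" or just "normal inheritance" is exactly the ambiguity between the two meanings above.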
So really when people say that "design patterns" are bad, what they really mean are 1) the GoF patterns are only necessary because your language is flawed or 2) people are picking the wrong patterns to solve problems.
But all languages and software systems have patterns, they just might not be cataloged in GoF.
Whether a design pattern is implicit or explicit is a matter of preference. Personally, I prefer to make my design patterns explicit even when working in languages that don't require that convention because it makes my code easier to grok for others.
Edit: Thomas, I can't reply since this has been flagkilled, but for you or me an implicit pattern is superior. That said, many business programmers don't come from a functional background, and for those programmers I prefer to make my code easier to read.
Can you give a couple examples of pairs of patterns and native language implementations where the explicit pattern invocation is clearer and easier to grok? Norvig's simplest example is "factories" in languages where classes are already first-class and can be passed around without extra machinery. Clearly, Norvig is right about that one.
The one I think of off hand is the Registry Pattern. A Registry is a collection of objects or classes that need to be globally accessible. You can't just attach state anywhere in Java, so you create Registry classes to hold all of them.
But in Ruby, you can define instance variables on class objects, so that's where I put my registries. Let's say I have a custom logging class, named Logger, and I have three different logs that need to be accessed. What I would do is define self.[] on Logger, that accesses @loggers on the Logger class itself. So whenever I need a file logger, I just call Logger[:file] rather than LoggerRegistry.get_logger(:file)
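A minimal sketch of that registry, assuming a hypothetical Logger class (the @loggers hash lives on the Logger class object itself, not on its instances):

```ruby
class Logger
  @loggers = {}   # class-level instance variable, lives on Logger itself

  class << self
    # register a named logger in the class-level registry
    def register(name, logger)
      @loggers[name] = logger
    end

    # Logger[:file] instead of LoggerRegistry.get_logger(:file)
    def [](name)
      @loggers.fetch(name)
    end
  end

  attr_reader :dest

  def initialize(dest)
    @dest = dest
  end
end

Logger.register(:file, Logger.new("/var/log/app.log"))
Logger[:file].dest  # => "/var/log/app.log"
```

Inside `class << self`, `self` is the Logger class object, so those methods read and write the same @loggers that was set in the class body.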
Note that an instance variable on a class is different from a class variable. A class variable in Ruby is shared across all the descendants of the class, and up the chain too, to a point. (I don't think they go all the way up to Object, but I'm not too clear on the details there.) In practice class variables are rarely used.
For the Factory pattern, the usual way to do it is to define a .create or .build method on the superclass, and have it return or define the needed class. If it were me, I'd use .create to make objects, and .build to make classes.
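One way to read that .create idea, sketched with hypothetical class names (the superclass picks the concrete subclass from a tag, so callers never name the subclass directly):

```ruby
class Shape
  # .create as a factory method on the superclass
  def self.create(kind)
    case kind
    when :circle then Circle.new
    when :square then Square.new
    else raise ArgumentError, "unknown kind: #{kind.inspect}"
    end
  end
end

class Circle < Shape; end
class Square < Shape; end

Shape.create(:circle)  # an instance of Circle
```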
A Registry is a collection of objects or classes that need to be globally accessible.
This is a variation on the "Singleton Pattern" and IMO it is an antipattern. It tightly couples your classes to a "global state" class which makes unit testing very difficult. This is why Dependency Injection was invented. And in Lisp-like languages DI is totally trivial due to having first-class functions and closures.
Well, the coupling is there whether you're using DI or a Registry; it's just a question of where the code expects its dependency to be. In DI, the dependency is passed in as part of the set of arguments; with a Registry, the code expects it to be somewhere else.
What I like about using a class object to hold the state is that the class is being depended on anyway, so you're not introducing any 'extra' coupling. I also like to allow the dependency to be passed in as an argument, with the default being to grab it out of the registry, for testing purposes or in case I'm hooking into the code with a REPL for whatever reason. This way you get the benefits of DI without littering your classes with keyword arguments that, 99% of the time, are only going to interact with one thing.
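A minimal sketch of that default-to-registry approach (all names hypothetical; a symbol stands in for a real logger object):

```ruby
# Stand-in registry for the sketch
LoggerRegistry = { file: :real_file_logger }

class ReportJob
  attr_reader :logger

  # The keyword default is evaluated at call time, so production code
  # falls back to the registry while tests inject a double.
  def initialize(logger: LoggerRegistry[:file])
    @logger = logger
  end
end

ReportJob.new.logger                 # => :real_file_logger
ReportJob.new(logger: :fake).logger  # => :fake
```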
I agree. Let's imagine writing a compiler without the visitor pattern. In fact, hey, let's go a step further and build one in a functional language. I'd be willing to put money down that the strategy that would evolve for say, code generation, would look a hell of a lot like - wait for it - the visitor pattern.
To your point, OO & design patterns aren't necessarily intertwined.
The double-indirection of the visitor pattern is unnecessary in a language with pattern matching. I mean sure, it's just syntax, just sugar - but readability is important. As the saying goes, a design pattern is a structured way of working around a language deficiency.
From what they taught me at software engineering 101, double dispatch is exactly the point of the visitor pattern. If your instance of Visitor lacks double dispatch, that means it's just a very complicated way of performing external iteration on Composites.
Quite. It should be obvious how to implement it using std::function and lambdas in C++, for example. Having a bunch of polymorphic classes is an ugly implementation detail, not the essence of the pattern.
Actually, the visitor pattern (in Java) is significantly harder to use than pattern matching. Visitors break variable scoping. You have to manually add a lot of classes and instance variables to get the equivalent behavior of a pattern matching tree. Then you have the pain of having to jump around N classes to track what the code is doing.
You are better off by just adding an enum to each node and switch + cast.
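The pattern-matching alternative mentioned a few comments up can be sketched in Ruby (3.0+ `case/in`); the node types are hypothetical. One recursive function replaces the entire Visitor hierarchy, and each branch binds its variables right where they're used:

```ruby
# A tiny expression AST
Num = Struct.new(:value)
Add = Struct.new(:left, :right)
Mul = Struct.new(:left, :right)

# Structural dispatch via pattern matching instead of accept/visit
def evaluate(node)
  case node
  in Num(value:)        then value
  in Add(left:, right:) then evaluate(left) + evaluate(right)
  in Mul(left:, right:) then evaluate(left) * evaluate(right)
  end
end

evaluate(Add.new(Num.new(2), Mul.new(Num.new(3), Num.new(4))))  # => 14
```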
> A design pattern demonstrates a solution to a common programming problem!
That's true, but the recipe style presentation and implementation of design patterns demonstrates a common problem with programming languages -- lacking sufficient expressiveness to implement a general solution as reusable library code.
However, there is nothing really special about OOP here except that the industrial popular languages at the time when documenting "design patterns" (using that name), including implementation recipes, as a solution for recurring problems to which library code wasn't an adequate solution happen to be statically-typed, class-oriented, OO languages.
Somewhat similar things existed prior to OO dominance, though the adoption of the "design pattern" term and some of the structure of typical description from architecture hadn't happened yet.
For the most part, yeah, hence why it was given as a keynote at an OOP conference.
Or to look at it another way—this was a critique from a practitioner of OOP, speaking to other practitioners of OOP, not a critique from someone outside looking in.
I'm not a huge fan of OOP either, but I'm getting rather tired of these articles that rant on about it from a logically unsound position, like my grandfather rants about Obama.
OOP "sucks" because programmers suck. There are a lot of bad programmers in the world. Some of them occasionally write programming languages, too. Just because certain people fail to create quality projects, or because one particular language written in ancient times doesn't stand up to scrutiny, doesn't mean the entire idea is flawed.
Modern object-oriented programming (whatever you want to call it, I'm not going to get into any No True Scotsman debates) is a very well-understood paradigm. If you have a set of experienced, skilled developers and nobody is going off trying to do insane things, you can implement many projects with an OO design and it will be very successful.
Not every project is a Boeing 747 or the Sistine Chapel or Dante's Inferno. Most projects are just wood sheds and latrines and gas stations. In terms of grunt-work programming where the problem domain is well understood, OO works just fine.
And FP works just fine. And straight procedural works just fine. And it really doesn't matter. Because the quality of software has nothing to do with your choice of paradigm or programming language or operating system or anything. It is the team that matters. That's it.
And if you're embarking on a massive air liner or an epic poem, your basic choices of tools are once again not going to matter that much, either. It is again up to having the right people. In this case, you need brilliant people, and not just people who struggle to put together a birdhouse without supervision.
Joe Armstrong is quite fond of calling Erlang the only true OO language in the original sense: methods are messages sent between isolated and opaque objects. This isolation is really enforced at the heap level, and method calls are really messages being sent to an actor's mailbox. So it maps very well onto the original concept.
> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them. — Alan Kay
Alan Kay's restrictive definition of OOP is a lot easier to apply than the buzzword-list definitions. It clearly rules out a lot of languages.
C++ 'OOP', by comparison, is little more than syntax sugar for calling functions that take a struct pointer as their first parameter; a feature to make the compiler forcefully limit the access scope of certain members; and a v-table implementation.
I really like Go's OOP implementation because it doesn't pretend to be anything more than the 3 features listed above.
> This isolation is really enforced at the heap level
This is a very good point. True messaging OOP with "extreme late-binding" requires by definition that no object knows the memory layout of any other object. In practice, we achieve this by putting everything on a heap.
That characteristic is against C++'s goals. Heap isolation is useful for implementing certain features but it is not acceptable to apply it to the entire program when runtime performance is important.
But if you don't require the pattern everywhere, it really cuts down on the reusability argument for OOP, because inevitably you will want to change a part of your program that doesn't use the heap isolation.
It's also hard to reconcile Kay's definition of OOP with C++'s strict ownership semantics. It seems like Kay's OOP requires a garbage collector almost by definition.
C++'s template system is almost like the dual of Kay's OOP.
Instead of messaging, every function call is a candidate for inlining. State might be hidden by the `private` keyword, but all memory is accessed directly. All things are bound at compile time.
He's all over the place, and as other commenters have mentioned some of it is a little ridiculous. However, there are a few great kernels.
I am guessing that one of the reasons that Test-Driven Development is popular is that it also exposes object interactions during development.
This is exactly right. The reason TDD is useful goes right back to the fact that Well Factored Programs Cannot be Understood Statically [0].
Type systems cannot deal well with the fact that programs change, and that different bits of complex systems may not be consistent.
Oh my, this is exactly right. Most of my work is doing integrations with a variety of APIs. I try to do as much as possible in JavaScript because its dynamic nature papers over the major issues in nearly every "enterprise" API. I'm very excited about the prospect of pluggable types [1] that he mentions.
I challenge you, however, to name a single programming language mechanism that supports change.
The first I've ever seen was just yesterday [2], but even that is just a tool attached to a language ecosystem, not a part of the ecosystem itself.
I like to think that one day, when we grow up, perhaps we will think of the model as being the source code.
I think this is already the case in many circles. For instance, domain-driven design makes it a priority.
> ...we have difficulty distinguishing real and accidental complexity.
This might be the most insightful statement in the article. I'd be interested in reading more about this and how to mitigate the problems.
If you're looking for a focused criticism of OOP, you should look elsewhere. Some points in the article deal with programming in general, sometimes very vaguely, like the single sentence criticism of design patterns, methodologies, on change, on types...
For me the point of OOP is that it isn’t a paradigm like procedural, logic or functional programming. Instead, OOP says “for every problem you should design your own paradigm”. In other words, the OO paradigm really is: Programming is Modeling
Bingo. The biggest problem I deal with in an OO language is people (especially non-programmers like business analysts) trying to "model the world" with class diagrams and inheritance trees. It only leads to endless debates about whether something is-a something else, or has-a something else, and it takes a lot of time away from implementing the solution to the actual problem.
* It's nonsense to blame OO for the "my programming language is superior to yours" attitude. It is a natural trait: Ford vs Chevy, Hockey Team A vs Hockey Team B, CPU A vs CPU B. Additionally, this sort of comparison is what the entire article is.
* Whatever type of programming language you use you will do modelling. You might do the modelling differently but you still have to do it.
* Stating that procedural code is just easier to read might be an indication that the author has spent most of his time reading this type of code. Talented programmers will make their code easy to comprehend no matter what is in their toolbox.
To me, messaging is the most important part of "OOP", and "OOP" should have been Message Oriented Programming. Calling it object oriented programming and having classes and inheritance made all of academia excited because you got to name and classify things. Object reuse was going to save the world, etc.
Message oriented programming - that you need some kind of message protocol between two things to communicate is something we do a lot of, and has built huge systems like The Internet and all of our web services and JSON API's and all that jazz.
The beauty of MOP is that once you have a format between two things, you can simply code to the format. We do this all the time with Web API's. The entire paradigm of microservices that is gaining popularity is really close to what Alan Kay was talking about when he created OOP. The biggest difference is microservices add network overhead complexity.
If we built software as a series of Objects/Services/Containers that sent messages to each other over an enforced protocol, many of the OOP complaints go away because what is important is NOT the object, it's the communication protocol between objects that drives value.
We have completely inverted the paradigm and lost a lot of value along the way.
You don't need a lot to do message oriented programming. You need request/response formats, you need to enforce those formats, and you need a sender and receiver that do things on either end of the request-response.
Erlang and the Actor pattern seems to be one approach to solving problems using this technique, microservices is another.
I think there is a third option where you have contained objects/services/whatever running in the same program, sending simple data structures like hashes as messages back and forth to each other. It would work well as long as you have code to enforce the message protocol between things (right data format, nil checks, etc.).
That approach could be done in OO languages like Ruby, Java, C++, PHP, Python, etc. OR it could be done in FP languages like Clojure, LISP, etc.
It's not complicated, it's just applying the request-response pattern to connect objects/processes/code together in a sane way. Sort of like UNIX pipes on the command line. Sort of like HTTP. Sort of like a lot of things we use, but don't seem to think would be good patterns for code.
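A minimal sketch of that third option, assuming a hypothetical message format: plain hashes as messages, with the format enforced at the receiving boundary:

```ruby
# The agreed-upon request format: every message must carry these keys
REQUEST_KEYS = %i[action payload].freeze

# Enforce the message protocol at the boundary
def validate!(message)
  missing = REQUEST_KEYS - message.keys
  raise ArgumentError, "missing keys: #{missing.inspect}" unless missing.empty?
  message
end

# A receiver that handles requests and answers with a response hash
def receive(message)
  msg = validate!(message)
  case msg[:action]
  when :greet then { status: :ok, body: "hello, #{msg[:payload]}" }
  else             { status: :error, body: "unknown action #{msg[:action]}" }
  end
end

receive(action: :greet, payload: "world")
# => { status: :ok, body: "hello, world" }
```

The sender and receiver only share the format, not each other's internals, which is the request/response point above in miniature.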
I have seen so many of these posts that I am now only willing to take opinions on languages from people who do actual research or develop a language that is popular. Anyway, I think this is an opinionated post that one can choose to reject if one needs to. The article is very vague, to say the least.
Well, Professor Nierstrasz has been doing language research since the early 1980s with a particular focus on object-oriented and concurrent OO paradigms, so he definitely fits your requirements. This should be taken for what it is -- a breezy, somewhat exasperated tour of where OO hasn't lived up to its (often rather handwavy) promises, from a researcher who has spent his career trying to improve the theoretical and practical state of the art.
> I hate having to say everything twice. Types force me to do that
That's just because most OO languages the OP uses don't have powerful type inference. Sure, type inference is tricky in languages with subtyping, but it certainly doesn't need to be verbose and redundant like it is in Java.
Given that the compiler knows that myMethod cannot return anything but a string, there really should be no need to state it explicitly - it's redundant information. Likewise, myVal can only be of the same type as myMethod's return type.
Languages with type inference get rid of (most of) that redundancy. Take Scala, for example:
    def myMethod() = "hello, world"
    val myVal = myMethod()
Slightly OT, but can someone explain the opening quote: “Object-oriented programming is an exceptionally bad idea which could only have originated in California.”
I'd always thought the Object-oriented programming originated in Norway. Is there some pre Simula language I'm unaware of?
The complexity of a program has to go somewhere. Everybody's got different ideas about where it should go.
Old-school Unix guys would prefer that you divide it all up into lots of differently-focused languages, tools, and such based on the ethic of 'doing one thing, well'. Which is fine and all, but it forces everybody new to learn a hundred little tools and languages to make anything significant.
The great thing about using a language/framework is that all your tools can share the same semantics. With OOP, you can model all sorts of interactions between domain concepts and do it in an iterative fashion.
But most OOP languages are insufficiently powerful, as the article says. It's difficult to move complexity around because OOP languages tend to not be dynamic enough to really be fluid like you need.
If you ask me why I think everyone hates OOP, it's because they haven't really understood Ruby's object model.
Ruby makes most everything an object that needs to be an object. A primitive is just a syntax for creating a built-in object. All of them ultimately inherit from BasicObject, which is separate from Object because it allows you to create entirely new hierarchies.
A block is not an object, but it can be turned into a Proc, which is.
All of this fluidity makes it so you can push the complexity of your application to wherever you feel it needs to be. Being able to see exactly how complex your OOP program is is a feature, not a bug. How can you remove duplication if you can't see it? How can you see the dividing lines of your emerging domain model if there's no way to make your code model anything, if it's obsessed with data rather than relationships between abstract concepts?
With Ruby, you can write the long complex methods, then gradually DRY them up into a bunch of methods, then collect those methods into classes, slowly deciding what needs to be class behavior and what needs to be object behavior, then meta-program the creation of those classes and objects with a DSL/front-end, then finally turn the whole thing into a tool you use declaratively which generates whole new concepts on the fly.
It works the way the human mind works, we take concepts and model them and their relationships in our heads and try to reason about edge cases. Once it's all well-defined, you can just add new cases as data rather than as code. Your business users think in terms of classes and objects, your boss thinks in those terms.
If you don't have a domain model you're at least trying to work out, then it will be that much more difficult to get them involved in the process. You want them involved because otherwise you have to understand the domain yourself. Your boss will have to come to you whenever the promo generator isn't quite working because he won't really understand the semantics behind it.
If you have an object model you're hacking on, you can just hand it to him and say, "OK, point to where you think the problem is." But if it's just a bunch of functions focused on data, that might make it easier for you to trace the data through your pipelines and find the bug, but you're going to spend a lot of time translating between the domain model your boss and business users are already (if informally) using to understand the application, and the way you understand that it actually works. They're already using a domain model, so it would behoove you to create a canonical model and make it as reflective as possible of the business needs.
You don't use OOP because it makes it easier for you to understand. It doesn't. Clarifying a domain model is hard work. You do it because that complexity has to live somewhere, it should be the domain model, because other people besides programmers are involved in the system too.
Hate? You really exert the time and effort required to hate something, towards hating object oriented programming? Surely there are better uses for your time.
hoo boy, this article sounds like it was written by somebody who only had a first-year course in OOP. I hope it's mostly tongue-in-cheek, but in case it's not, let's quickly go through it:
1. Paradigm: The paradigm of Object Oriented Programming follows directly from its definition of an object: a structure that contains both data and behaviour. Objects are your first class citizens. "Everything is an object" is, for most, a goal. Everything else, all the variations on how to formulate it or how the language implements it, are interpretations of how to do that.
2. Object-Oriented Programming Languages: I agree, everybody has their axe to grind with another language. This is no different from other programmers. On the flip side, for most this axe-grinding is pretty light-hearted. In a world as big as the OOP one, rivalries are everywhere.
3. Classes: "There is a complete disconnect in OOP between the source code and the runtime entities". Because the runtime entities contain data, while classes determine the behaviour on that data. It's a direct consequence of point 1. OO code shows us the behaviour of an object interacting with the behaviour of other objects.
Everybody is free to dislike it, but calling it 'nonsensical' is ridiculous.
As an aside: there are IDEs that show objects. BlueJ (https://en.wikipedia.org/wiki/BlueJ) comes to mind. They tend to be educational tools, because seeing and inspecting actual objects during development means you still need to work on how you mentally model the code.
4. Methods: Methods should be a sensible length. Make them as short and as sweet as is possible, but no shorter. If you need to go bouncing around huge amounts of tiny methods to figure out what is going on, you have a code smell. If you have to search through a 1000-line monster of a method to figure out what is going on, you have a code smell too. The goal is to find a sweet spot, not to apply some rules to their extreme.
5. Types: I'll be short on this one: the point of types is to make sure you don't accidentally do something stupid. In the process, they make other things harder, but also more predictable. If you don't like that, no problem: there are OO languages that are dynamically typed, or duck-typed, or any other variation on the theme.
6. Change: I don't quite know what the author means by change here. He seems to mix up change of the program (maintenance, extension, etc.) with change in the program's environment.
7. Design Patterns: A standard way to solve a standard problem. Refusing to use them or not knowing them will lead to either bugs or reinventing them. If they 'make your design more complicated', you're using them wrong.
8. Methodologies: Are a general problem in all programming, and always have been.
9. UML: I'm a professional developer. I almost exclusively do OOP. I haven't touched UML in years. I haven't seen UML in years, not from other developers, not from architects, not generated by the code. The code is the code, the modelling language is a way to communicate about the structure of the code. Reversing that relationship just means you're hiding code behind a graphical representation. See also: BlueJ.
10. The Next New Thing: Happens everywhere.
The conclusion sounds like a complete inversion of the original premise of the article. I agree we have a long way to go in OOP, and I agree we are still in the early days of OOP. But hating OOP is as shortsighted as hating FP, or procedural programming, or any other paradigm.
This was a speech at an OOP conference. The author is a computer science professor with a couple hundred publications. This is an entertaining rant from someone speaking informally about a subject they know very well, not just someone with a first-year course in OOP.
This article is terrible. There's no insight here whatsoever, he even fails to understand one of the quotes that he cites.
> “There are only two kinds of languages: the ones people complain about and the ones nobody uses.” — Bjarne Stroustrup
This quote is about how everybody complains about the languages everybody uses, and nobody complains about the ones nobody uses because nobody uses them.
The underlying fact is that all tools are flawed. Familiarity breeds contempt. If you don't see the glaring design problems with your language of choice, it probably means you haven't used it enough yet. Or you may just be blinded by fanaticism.
A good developer chooses the existing tool that is best for the job, or creates a tool if one is needed. She does not waste energy complaining about the available tools just to complain.
"" Patterns. Can’t live with ’em, can’t live without ’em.
""A design pattern demonstrates a solution to a common programming problem! How can you take issue with them? You don't need to program in an OO language to use the concepts that design patterns describe.
The whole article just seems to be anecdotal and full of straw men. If it was written in jest then the humor was lost on me.