I'm always confused when people criticize OOP, because most of the time the criticisms they use are just plain bad programming. Or they're using hyperbole that isn't really useful.
You want to know why OOP is useful and common? Because it's easy. Easy things allow for faster development and faster onboarding of devs. Humans need mental models of things, and OOP very explicitly provides one.
But like most easy things, OOP still works well enough even when it's done badly. That's not a bad thing. Working software beats non-working software.
We shouldn't be telling people "stop using OOP! It's only for idiots!". OOP will be here forever. We should be teaching people how to do OOP right, what the pitfalls are that lead to bad design, and how to take badly designed OOP and fix it.
> I'm always confused when people criticize OOP, because most of the time the criticisms they use are just plain bad programming
Every paradigm can implement any type of program needed, in principle. The point of criticising a paradigm is to argue to what extent that paradigm encourages that bad programming you mention. If OOP naturally frames problems in a way that leads most people to bad solutions for common problems, that's a legitimate criticism.
OOP was the overwhelmingly dominant paradigm of the nineties and early aughts, largely because it was better for large-scale programming than simple procedural code, and because it got bolted, in a very backwards-compatible way, onto the overwhelmingly dominant procedural language of the immediately preceding period. None of that means it is going to be around, in a significant way, forever.
> You want to know why OOP is useful and common?
Path dependence: its superiority for large products over what was dominant just before, the dominance of C before OOP became common, and the near-simultaneous introduction of C++ and Objective-C, which framed the debate about how to move beyond C. C++ won that debate, which is why not only is OOP dominant, but static class-based OOP on the C++ model, both structurally and syntactically, is dominant within OOP.
Yes, OOP is easy. At least the layers of sedimentary cruft were comparatively easy to write. The parts that actually do something are not easier in the slightest: it's still procedural code that must move data. (Usually that code is much less straightforward in OOP, because it has to deal with so much boilerplate that stands in the way of accessing the actual data.)
This comes up a lot, and I think there's a broken assumption here. The core business logic of a product "actually does something", to be sure. But that doesn't mean that all the other code is somehow less important or pointless. Code that allows multiple people to work on the same abstractions in slightly different ways "actually does something" too--it facilitates cooperation and interoperation. Test code "actually does something" too--it helps prevent issues and ensure quality.
All of those things can be done wrong (core algorithms can have bugs, abstractions/interfaces can confuse people, tests can lie/test the wrong things), but they're all equally necessary in many cases. I think the misconception about non-core-procedure code being "cruft" automatically comes about because very early on in small projects, you don't need much more than the core. But when projects grow (to more people or just more code), other goals, like standardization/maintainability/extensibility/testability, become just as important as the goals of the core: if your added engineering staff can't add value to the company, or if your software is fatally buggy, it doesn't matter how good the shape of the original core-ideas code is, you'll lose money.
I had similar thoughts, before I started learning Rust.
All your arguments (standardisation, maintainability, extensibility, readability) are true, but those qualities are not necessarily brought to you by (ab)using OOP.
I have experienced that OOP can at times make people see everything as a nail, because all they have is a hammer. In the end, a lot of the real work a piece of code does is the transformation of data. OOP often distracts inexperienced programmers from this, which leads to unnecessary complexity in business logic and interfacing alike.
Thinking in the line of data transformation is useful because you will focus on two things: how to do this transformation fast/simple/extensible and what the most useful abstraction over said transformation is.
You can (and given some complexity, must) achieve all of the mentioned attributes without using classes at all. And sometimes taking classes away can make things so much better.
It's maybe confusing because you are presupposing OOP as the default. Humans tend to observe some situation and synthesize stories to explain it, even when it comes to observing their own actions.
I don't mean to unfairly paraphrase, but to transpose what you wrote so it shows the stories you are telling yourself:
- usually people criticising OOP are "bad programmers"
- OOP is easy, alternatives are hard
- Even if the OOP model is a bad fit, it's still a model, and the extra devs needed to compensate for its bad fit can easily be chained to the oars
- If OOP makes trouble, it was because it was "done badly"
- Most easy things work well enough even if "done badly" (that is just not true)
- If OOP can be cobbled together to work at all, it is justified since that justifies anything
If you consider a simpler proposition like, "this can be done in a few hundred or thousand lines of C99 instead of latest C++ and boost", I think there is just no valid reason to bring OOP into it.
Even when I was writing C, I had different modules in different files, only exposed certain functions via the header files and had structs with function pointers that did different things based on how the struct was initialized.
This was before I knew anything about OOP. How is this any different than object oriented programming?
I think the main benefit of OOP isn't really abstraction, like modules, because even function signatures coupled to a struct definition can be a great abstraction. Abstraction is about generalizing something in order to hide its inner workings. Modules can do that just fine.
What OOP "got right" was to make it easy to use indirection to pass function pointers. That way, you can easily create pluggable systems at runtime, so that strategies can be dynamically chosen. Passing function pointers in c can be pretty tedious, if you need to send implicit state with it.
I've started to get really annoyed with indirection though. It's getting abused so thoroughly in order to make bad designs testable through dependency injection. I find that it is much easier to use dependency rejection instead.
> You want to know why OOP is useful and common? Because it's easy.
With as much proof as you have given, let me offer you a counterargument: it is common because academia loves OOP. It's easy to teach, it's easy to test. It is most decidedly not easy and very often not useful.
My favorite example of how everything falls apart in due time is the color of a car. That's it, right? A car has a color. A Porsche Panamera might be a "carbon grey metallic" and it's stunning, but that's just one color still. Aye, up until the Mini Cooper tells you they need two colors. This world doesn't fit the OOP straitjacket. Your programming course does, but the real world doesn't, and when it doesn't, pain follows.
If I’m programming in a monolith, I change the Color property to a List, I get a big red dot at the bottom of my IDE with the number of errors the change caused, and it tells me all of the places I need to change the code.
If my car class is part of a Nuget package, I change the version number to represent a breaking change, following standard semver semantics. When the consumer upgrades to the latest package, they either get warnings about Color being obsolete and use Colors instead, or I completely remove the Color property and they get a red dot and change their code, or they decide they don’t have time and keep the old package around for a while.
If I’m writing an API, and since I did properly separate my domain model from my view model, I map the first item in the Colors list to the Color property in my view model and create a new Colors property.
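A minimal Java sketch of that migration path (hypothetical names), keeping the old property as a deprecated view over the new one:

import java.util.List;

class Car {
    private List<String> colors = List.of("carbon grey metallic");

    public List<String> getColors() { return colors; }

    // kept so old callers still compile, but flagged at every use site
    @Deprecated
    public String getColor() {
        return colors.isEmpty() ? null : colors.get(0);
    }
}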
You seem to be translating a modeling failure into an indictment of the whole object modeling paradigm. Your car model doesn’t account for multi-color liveries? Revise and refactor the model. Adapting to change isn’t the same as falling apart.
Maybe we're exposed to different evidence, but it seems like academia heavily favors non-OOP such as functional programming. Programmers also repeatedly cite the SICP class they enjoyed in college, then say they were forced to deal with OOP when they got a real job. It's the commercial industry that pushed OOP. Universities seem very anti-OOP, while commercial businesses like Adobe/Microsoft/Google use OOP languages like C++. Other companies that write back-office "enterprisey" software also favor OOP languages like Java/C# over non-OOP languages such as Haskell. I also remember the 1990s, when professional programming magazines ran monthly articles evangelizing the new thing called "object oriented programming" and full-page ads for Turbo C++ and Microsoft C++ touted their new OOP features.
>My favorite example of how everything falls apart in due time is the color of a car. That's it, right? A car has a color. A Porsche Panamera might be a "carbon grey metallic" and it's stunning, but that's just one color still. Aye, up until the Mini Cooper tells you they need two colors. This world doesn't fit the OOP straitjacket.
I don't understand why your example as you've constructed it proves what you want it to prove. If we use your "car color" as a text template to test the validity of other computer science ideas:
Database table paradigm:
create table car (
vehicle_id varchar(20),
color_code varchar(1)
);
... but a car like Mini Cooper can have more than one color. The world doesn't fit the relational table straitjacket.
... but a car like Mini Cooper can have more than one color. The world doesn't fit the algebraic data types straitjacket. The world also doesn't fit structs. It doesn't fit the char data type, or array of chars, and so on.
It seems like your example applies to any and all computer science concepts that attempt to model real-world data so maybe I'm missing something?
It's not OOP that's the problem, but your mental model. There's no OOP language that says a car has to have one color.
You can define an object to have more than one color, or maybe instead of having a color, have a color pattern. How creative is your imagination? If you use a tuple of (r, g, b) to represent color in a functional language, you will still have the same problem when you realize you can have multiple colors or design patterns. If your mental model has a mismatch with the real world, you will have a problem at some point.
Sometimes it's not primarily that it's impossible to model this stuff in OOP, but rather that the change in requirements over time can be more difficult to implement in an OOP codebase. At least that's been my experience.
Okay, so how would you model that in a different paradigm? And why could OOP not model that the same way?
Your example also doesn't specify a business context. What kind of software am I building such that I need to model a car and its color? That simple question will entirely change how the OOP model is designed.
OOP is easy to teach because it's easy to use and to understand. But again, if you use simple, poorly-thought-out OOP, it may work, just badly. And of course there's the problem of "those who can't do, teach", and often those teaching programming are poor programmers.
Why is the color of a car an issue with OOP? Without knowing the use cases, neither of us will model it correctly for the job at hand.
A car obviously has multiple colors. It has at least two: exterior and interior. Quite often there are two-tone colors. But as a manufacturer, I'm going to guess that each interior and exterior color scheme has an ID. If you are a car manufacturer, the interior and exterior color scheme ID would be all you need.
Although in fact you should probably allow for composite parts with multiple sub-colors, where each part supports its own enumerated list of possible colors.
In spite of appearances, it's not a completely trivial problem.
Maybe OP meant that to someone doing naive modelling, defining a car class and then giving it a single color property is going to cause problems. Which of course it is.
And maybe also that OOP encourages this kind of superficial thinking. Initial ignorance - and initial assumptions - about the problem domain get baked into the architecture. It becomes increasingly hard to change them as time passes and code grows around them.
Essentially, OOP mixes up data schema and software architecture in a brittle way. Of course you can build abstract classes for variable schemas, but then you're really doing meta-OOP, and there are probably better options.
OOP isn't the only paradigm that does this, but the brittleness seems to be characteristic. If you keep your schemas separate and explicit it's not usually all that difficult to extend/change them. If they're buried in class definitions and you don't have a dependency map to see which part of the schema is used in which part of the code, non-trivial refactoring can become a complete nightmare.
Absolutely not! I doubt that anybody was ever taught OOP in an academic PL course, unless it was really an "introduction to programming" course, or their professor was working on this topic at the moment. The meaning of a program in an object oriented language with imperative features and inheritance is not pretty.
There are aspects of object oriented programming that are useful for structuring programs (hidden state) and others which are a recipe for disaster (recursive types). This complexity is always swept under the carpet when teaching OOP. Classes/inheritance/objects are always taught via imperfect analogies, which should tell you everything you need to know about how "easy" OOP really is.
No, the reason it is taught so widely is purely practical. It's a popular paradigm and a lot of practical programming projects might need an OOP background.
---
Edit:
Just to be clear, I'm not trying to say that OOP is bad per se.
The information hiding and namespacing aspects of objects are really useful, both in theory and in practice. It's just that I think that implementation inheritance is an imperfect way of facilitating code reuse and not something you should teach to new students...
> The information hiding and namespacing aspects of objects are really useful, both in theory and in practice. It's just that I think that implementation inheritance is an imperfect way of facilitating code reuse and not something you should teach to new students...
And most professionals agree with you. “Prefer aggregation over inheritance.”
Which by the way Resharper makes really easy. You add a private variable to your class of the type you want to aggregate and it creates wrapper methods in your aggregating class that just calls your other class.
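In code, aggregation plus generated wrapper methods looks roughly like this (a hypothetical Java example):

class Engine {
    void start() { System.out.println("vroom"); }
}

class Car {
    private final Engine engine = new Engine(); // aggregate rather than inherit

    // wrapper method that just delegates to the aggregated class
    void start() { engine.start(); }
}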
It’s in SICP, leading up to the adventure game project. That dominated intro-PL conversations for a long time. Also, it’s OO with delegation-based inheritance, prototypes—and some discussion of why.
Sometimes there is just no right way when it comes to using OOP. Sure OOP is often a hammer that will make everything look like nails, but for some problems it really just isn’t the right approach at all.
I've been programming for 15 years (so I was around for the rise of OOP). There's nothing fundamentally wrong with OOP. I still write a lot of classes. But what I haven't written in almost a decade: code with inheritance, encapsulation or polymorphism. Nothing has exploded.
Also, objects are best when used to represent actual objects (i.e. collections of data rather than collections of functions).
This is OK
User user = new User("John");
But this:
StringTokenizer st = new StringTokenizer(str, ",");
while (st.hasMoreElements()) {
System.out.println(st.nextElement());
}
Might not be as clean as:
stdout(tokenize(str, ",")); // functions are hypothetical
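(For comparison, real Java can get close to that hypothetical version with the standard library; a sketch:)

import java.util.Arrays;

class Tokens {
    static void printTokens(String str) {
        // split + stream stand in for the hypothetical tokenize/stdout pair
        Arrays.stream(str.split(",")).forEach(System.out::println);
    }
}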
I was about to reply in C# that would usually be done with LINQ, but then I realized I kind of just proved your point since LINQ is functional programming not OOP....
> Also, objects are best when used to represent actual objects (i.e. collections of data rather than collections of functions).
Except these aren't objects, they're just records. I don't think it's a good idea to pollute the word "object" or OOP. You use objects specifically when you need procedural abstraction, otherwise you use other types of values, like products/records, sums/variants, etc.
At high system levels, where you have client/server or other types of protocols, objects fit. At lower levels other paradigms tend to be better.
The best arguments against OOP, as it is done in most languages today, are made by Alan Kay, the creator of Smalltalk and the term "object-oriented". Alan should have used the term "message passing" so people wouldn't miss this component of Smalltalk that made it so great.
> You want to know why OOP is useful and common? Because it's easy.
Try rewriting that without adjectives. Engineers are very convincible, but they need evidence. Personally, I don't find OOP to be particularly useful or easy.
Check this out... "I became a much better programmer when I started writing functional code. Now I tend to write OOP code in a functional way, but learning how to do that was not easy, and I am not sure most would find it useful." By saying that, I'm basically taking a position that's the opposite of yours, and while it's true to me, it provides no evidence to help you evaluate my side of the conversation.
But this is often the result of a good property of OOP: OOP software is easy to write, and allows for easily abstracting over problems.
The net result is OOP programs tend to be more complex. Not "more complex than equivalent software". More complex. But also more capable, more abstract, easier to extend, at least along foreseen lines, easier to change (although that's mostly a tooling thing I think).
But yes, bigger, more intricate software is more difficult to debug. Abstractions don't help here, because often the problem is that there are edge cases where the abstraction doesn't work. Debugging with abstractions also requires you to know and understand what the abstraction does.
There are lots of cases where forcing yourself to stick to OOP styles of programming is much less easy. You could look at various comparisons of line counts of equivalent programs in C# and F# as some basic examples.
I don't mind having classes that can inherit available in a language, but languages like Java and C# that force you to use only those things lead to a lot of unnecessary boilerplate. You can work around this with static classes full of static functions that you can treat as 'modules' full of free functions. Which is what I do a lot :D
But I wish I didn't have to bother with that workaround.
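For example, the 'module' workaround in Java (hypothetical names):

// a non-instantiable holder class acting as a 'module' of free functions
final class MathUtils {
    private MathUtils() {}

    static int clamp(int value, int lo, int hi) {
        return Math.max(lo, Math.min(hi, value));
    }
}

// the call site reads like a namespaced free function:
// int hp = MathUtils.clamp(damage, 0, 100);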
I would suggest that boilerplate is a small price to pay for having a compiler enforce design discipline and conventions. Remember that code should be written for both the runtime environment and the next coder who will have to understand it. Constraints enforced in Java often make it easier for the next dev. I suspect part of this is that in unconstrained languages, there are so many ways of doing the same thing that pattern recognition becomes more difficult for all but the most experienced programmers.
> pattern recognition becomes more difficult for all but the most experienced programmers.
In fact the opposite is the case. The boilerplate obscures underlying patterns because it forces you to read twice as much to infer the same amount of information. Roughly the same design patterns are used in more expressive languages, just without all the boilerplate. This makes the pattern being employed completely obvious since it's only a few lines of code.
I'm sure, and yet it remains true that you spend a lot more time scanning and scrolling your mouse wheel than is necessary. You can also fit far less context on screen, making debugging and "learnability" harder.
There is overhead, but with that overhead comes maintainability, type safety, etc.
For me, on one side is Python: no compiler overhead, very little boilerplate, no complicated IDE, etc., but no type safety, and harder to maintain for larger projects. On the other side is C#: all of the overhead of a heavyweight compiler, but the type safety and great IDE make it perfect for larger projects where multiple people will be working on it simultaneously.
For simple scripting, where I can mentally keep everything in my head and think through how one change will affect everything else, Python is my go-to language. For larger projects, C#.
I’ve hated all scripting languages I’ve encountered over 20 years, but for some reason I love Python. It’s also a great teaching language.
I agree Python is good for teaching, but the maintainability and type safety aren't features of the boilerplate. This is clear when considering languages like OCaml, Haskell and other languages featuring type inference.
I learned Rust after learning Python. I solve similar problems in Rust as I did in Python.
The single worst thing that Python taught me is that it would be cool/useful to make classes. Some call that a “sea of objects”. When I tried similar concepts in Rust, I learned quickly that things started falling apart much faster, at a point reached much earlier.
However this was also happening in Python – only much later in the game. This made me realize: instead of focusing on the flow of data and useful abstractions (”blackboxes”) over it, I actually made a lot of classes and objects and tried to pipe them into each other, without giving too much thought on the flow of data.
It felt so reassuring to make a Car class for car objects, with a Wheel class for its wheels, that I somehow forgot to think about the actual data that I wanted to work on.
The danger of OOP is IMO that it highly encourages bringing in preexisting patterns of abstraction that seem to make so much sense that you discard other, potentially more useful abstractions.
I have more broken Python code than I have broken Rust code, and one of the reasons is that a lot of the stuff I did in Python (rightfully so) wouldn’t even compile in Rust.
To go with the above example of C# vs F#: I find F# far easier to understand because the compiler enforces a strict top-down compilation order. Combined with the encouraged basic type system (records and discriminated unions), this enforces a very rigid design structure that is easy to understand.
Compared to that, C# is a ball of wool where you not only have to follow all the strands but also understand when something will happen.
I read this quickly to see if this was the piece that should convince me.
It was not. It is, IMO, a collection of strawmen.
People have abused OOP? Yes.
But
- citing FizzBuzz Enterprise Edition (which is really funny even for us Java/.Net developers because it is so horribly wrong)
or writing this
- Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere or holding references to related objects directly to shortcut it.
again IMO, demonstrate that the author never really understood OOP.
What probably is true however is that a lot of people should unlearn the OOP they learned in school.
>- Because OOP requires scattering everything across many, many tiny encapsulated objects, the number of references to these objects explodes as well. OOP requires passing long lists of arguments everywhere or holding references to related objects directly to shortcut it.
>again IMO, demonstrate that the author never really understood OOP.
That's a "No true Scotchman/OOP" fallacy. The OOP that the author "never understood" is what we see ALL the time in enterprise and startup code.
There could be a better OOP (e.g. Alan Kay's definition of it), but that's not what people are taught or practice.
> The OOP that the author "never understood" is what we see ALL the time in enterprise and startup code.
Well, is there a paradigm where you tend to see mostly good code? If so I’ve never come across it. Besides plenty of bad OO, I’ve seen bad functional programming, bad reactive programming, very bad state machines. Bad code is bad code.
(And what other categories of code are you thinking of, besides “enterprise and startup”? Those two would seem to cover a pretty wide range.)
Yes, people have abused OOP and every other programming paradigm or tool as well. It's probably a necessary aspect to learning something new... do something contrived, atrocious, and trivial in order to understand the concepts.
The OOP naysayers seem to often knock the textbook examples, but there are a lot of creative things that can be done with OOP that get around the criticisms.
> again IMO, demonstrate that the author never really understood OOP.
I find arguments like this legitimately fascinating. Obviously an amount of investment into learning common concepts is required. At the same time, if a topic is too complex for most to "truly understand", then it isn't useful.
How do we know the difference? What standards do we hold other coders to, and what expectations do we hold ourselves to?
I'd love to hear if there is much research on the topic. It is easy to find opinion articles, hard to find data.
I also want to strongly recommend Martin Fowler's "Refactoring" since it covers a lot of "good" OOP design and gives practical advice on how to take bad code and make it better.
The original "gang of four" "Design Patterns" book is also pretty good, although definitely less practical than the other two.
I wish they would have taught me this in school. Instead, I learned that a cat is an animal, and a car has wheels, and a car is a vehicle. But not what any of these things have to do with one another and how to actually use them.
SOLID still doesn't accomplish what the author is criticizing in my opinion.
If I have a SOLID codebase, how do I unroll the object graph to get, say, a hydrated JSON representation of that? (this is the data focus the author is talking about).
I think a person who has maintained an ORM like Hibernate or Rails and wants a simple data projection without having to use their specific tools to do so implicitly understands this pain. A data oriented approach where records are passed through to the business logic as necessary doesn't have this problem.
SOLID still has things like "cat.drivesIn(car)", and so even if the concretions are supposedly hidden behind an abstraction of that interface, the coupling is right there: "drivesIn" irrevocably binds a cat and a car together. In a data oriented, more functional approach, there is a function which happens to know something about cats and cars, and it pulls in two separate records to do its work. This is more of an a la carte approach to building relationships, whereas SOLID makes every consumer potentially need to worry about that because the coupling is in the contract.
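A sketch of that a la carte style, using Java records (hypothetical names); the function, rather than either type's contract, owns the relationship:

record Cat(String name) {}
record Car(String model) {}

class Trips {
    // the cat/car relationship lives in this one function,
    // not in either record's contract
    static String drive(Cat cat, Car car) {
        return cat.name() + " drives in a " + car.model();
    }
}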
The I in SOLID sounds like a major issue WRT a cross-cutting concern like security, in which hermetically sealed should mean airtight, not a piecemeal approach, etc.
Smalltalk does "OOP" properly because it focuses on message passing, the real benefit of "OOP". Any OOPL that isn't based on message passing is degenerate.
Going by what you write here, we should all praise PHP and Javascript.
Popularity amongst the masses of 9-5 brogrammers who code for money and have no artistic sense when it comes to programming doesn't really imply quality. All the programming languages that are truly technically great - today - have small userbases. They're not easily digestible by mediocre coders, require some upfront effort and an open mind. Since the vast majority of professional programmers today are average at best, languages like Erlang, Haskell, Ocaml, Lisp, Smalltalk will never be popular.
It follows then that when looking for quality one should actually reverse popularity.
While it's true that popularity does not equal quality, one should be careful not to go full tilt in the other direction and assert that non-popularity means quality.
Objective C and Smalltalk were decent languages twenty years ago, but terrible languages by today's modern standards, starting with the fact that they're both dynamically typed, which we know today, is an evolutionary dead end in PLT.
Unfortunately, no. While he has a lot of good arguments, they're drowned out by his repeated use of logical fallacies to push his point across. This long, rambling piece simply comes off as him bullying the reader into agreeing with him.
Note that I'm not saying he doesn't have valid points hidden in that wall of text, but he's certainly lacking the social skills and reader empathy to land a good argument, and as a result it's too exhausting to take him seriously.
While this is obviously wrong, it does serve to illustrate a point: when dealing with controversial subjects, there is a tendency towards tribalism. When you're for one "side", the other side is by definition wrong, and you must defend your "side" at all costs. Any criticism of any aspect of your side is an attack against the tribe and, by extension, you. An attack requires a counterattack, and since at this point the person is engaging their amygdala, even their most incorrect statements seem completely factual and rational to them.
It makes things like editors, languages, syntactic style, and development paradigms impossible to discuss rationally and reasonably.
Maybe you should go back and read it slowly, as it was a good argument.
> again IMO, demonstrate that the author never really understood OOP.
This is what they always say...strange how OOP is the one paradigm no one ever seems to understand, no matter how much is written about it. Seems to me like there isn't actually anything to understand.
> Maybe you should go back and read it slowly, as it was a good argument.
Don't be a dick.
> This is what they always say...strange how OOP is the one paradigm no one ever seems to understand, no matter how much is written about it. Seems to me like there isn't actually anything to understand.
It's not. Developers from various backgrounds frequently fail to understand all different kinds of development paradigms, and I've seen _just_ as much awful imperative code as I have awful OO code.
I'm not really sure what that is. But I notice OO is unique in that "you don't understand it" is always the main defense. The trouble is that, when people can't even agree on a definition of OO, it is genuinely not clear that there is anything to understand. There is always some OO best practice that is ill defined and contradicts the advice yesterday. No one from OO land seems to enjoy math very much so you don't get precise definitions, you get "patterns" and "I know it when I see it".
> But I notice OO is unique in that "you don't understand it" is always the main defense
It's really, really not. This kind of argument comes up in a number of places, and I think there's a commonality – it tends to appear where systems are flexible, open-ended, and easy to start using, such that people tend to pick them up without thinking about how and why they will be using the tool. You see the same thing with "agile" for example.
OO isn't complicated. At the most basic level, it's just taking a data structure that you might use in any other programming approach, and attaching functions to it as methods. Go is object-oriented in this sense, for example.
Problems arise when bad implementations of OO concepts appear – but pointing at say excessively-enterprise Java and saying "this is evidence that OO is bad" doesn't really hold any water.
> The vast majority of essential code is not operating on just one object – it is actually implementing cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p)? What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player have a currentWeapon() getter?
Not saying this is the right way to do things but, if you're actually going to go 100% OO on something:
Player.hits(Monster) returns Hit
Hit takes a Player and a Monster in constructor, and has with(Weapon)
You end up with:
Player.hits(Monster).with(Weapon)
Then Hit reaches into said objects and deals with changing the HP, and XP. You then have encapsulated the details in the actual Hit itself, which seems correct.
Great point... but doesn't it continue to prove the author's point that this is needlessly complex?
It strikes me as bad design if something as simple as an action (hit) now needs to become its own class that is instantiated merely to execute a single function and then has to delete itself. That just feels like insane overhead/boilerplate, no?
The argument here is that it's far better to have player and monster be simple data structures, and a single function hit(player, monster, weapon).
The Hit class may be a Singleton and only instantiated once. Creating a class is like 3 actual lines. Also, at least in Java, the Hit class can be an inner class of the Player class. It just feels like namespacing in that case.
It is needlessly complex if the code is simple.
This solution is more flexible than just a pure function. If you have different players who use different hit strategies, how do you handle that with just a pure function? Lots of if statements in the function? Passing in lambdas everywhere you call the function? Then you could curry the function and call it something different too, and that works, I guess. OOP shines when you have lots of different ways things can work in different combinations. OOP is a tool like other programming paradigms; you use it when it solves your problem better than the other ways.
> If you have different players who use different hit strategies how do you handle that with just a pure function? Lots of if statements in the function? Passing in lambdas in every case you call the function?
I'll list a few strategies I've seen in different contexts:
1) Clojure has "protocols", which are a strategy for doing polymorphism where the verb is in charge instead of the noun.
2) Rust has trait objects, where you have to implement the polymorphic behavior you want for a separate player type but the data is separate from the behavior and not interlinked or owned by the class. You can implement the behavior in the scope of the module where it is used.
3) You already mentioned this, but lambdas in Javascript or functors in C++ satisfy a lot of specialization requirements.
All 3 of these approaches are different ways of approaching the problem of different behavior on the same data. The key thing with them is that it allows domain specific behavior to be decoupled from an owning object. There may be reasons that OO is more suited to a problem (as you mention), but I tend to agree with the author's post around the dogma of using OO everywhere.
In the article, the author also mentions a quote around encapsulation
> Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse.
In my opinion, this view of binding the data and behavior together stems from an idea that data is scary and can be changed at any time. In an immutable, pass-by-value, functional world, those getters and setters and indirection are just another thing standing between me and composing the data with the collections tools I am very familiar with. I think it was Neal Ford who said that "OO encapsulates moving parts, whereas functional programming removes moving parts." I think that's very appropriate in this context. In a data-first view, you model what properties and data shapes you care about in your records, and then you create little functions that connect those things together; in OO, you create the links first through methods (even if those have interfaces), and you're stuck with those connections anywhere you want to pass the object.
Ultimately the answer is to use a language in which passing a function reference is as easy as writing a singleton that implements an interface and passing a reference to that. (Or easier—in any language with decent function-passing support, writing a function and referencing it is generally going to be easier because fewer things are happening.)
OO makes sense if the different types of behavior map to object identities and inheritance trees, because it generates a chain of implicit if statements that capture this logic. If they don't—e.g., a hit changes behavior depending on whether the attacker is proficient in the weapon, rather than depending on the weapon class or attacker class itself—you're likely better off writing the if statements yourself.
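A hypothetical Java sketch of that difference, passing a function reference where class-heavy OOP would demand a singleton implementing an interface:

import java.util.function.IntUnaryOperator;

class Combat {
    static int slashDamage(int base) { return base * 2; } // a plain function

    static void attack(int base, IntUnaryOperator strategy) {
        System.out.println("dealt " + strategy.applyAsInt(base));
    }
}

// Combat.attack(10, Combat::slashDamage); // method reference, no singleton needed
// Combat.attack(10, base -> base + 5);    // or an inline lambda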
> It strikes me as bad design if something as simple as an action (hit) now needs to become it's own class
Why, it's a noun in the problem domain. Why shouldn't it be a class?
> that is instantiated merely to execute a single function and then have to delete itself.
This may be inefficient in some language implementations because of limits of optimization, but there is no fundamental reason that, if that's all you do with it, the compiled code or runtime behavior needs to be much different than if it were just a procedure call.
On the other hand, lots of times you will want to do more with actions occurring in the domain than just executing them (queueing, logging/serializing, etc.), and having a datatype for the event (a class in class-oriented OOP) supports that.
> The argument here is that it's far better to have player and monster be simple data structures, and a single function hit(player, monster, weapon).
Naturally it's better to be simple if the logic involved actually is simple, but surely the point of the example is that game logic typically isn't. Hits in games don't just change a hit-point value, they tend to play animations, trigger events, work differently depending on this or that piece of global state, etc.
It isn’t uncommon, nor is it bad design to define value classes that represent simple actions. This enables the action/command/event value to be reified, serialized, transmitted over the wire, deserialized, and potentially processed remotely, asynchronously, or by multiple arbitrary consumers.
Hit is an event that occurs, so I'm not sure why it should escape OO, even though in this example it's only decreasing ints. I routinely code event objects rather than modifying another class directly by the delta that should occur. OO isn't good for continuous time and events, but it does add ergonomic convenience for the developer in this case (there could be other information on the hit, a historical list of Hits looks better as List<Hit> rather than List<int>, and it's easier to inspect a hit object during debugging than to set watches on player/monster/weapon values).
In the OO version of this pathology, these arguments become the instance variables of a class introduced to do nothing more than hold them.
Poor design has a way of cutting across programming paradigms, or even (in the case of inappropriate inheritance, for example) leveraging their features.
It has nothing to do with efficiency. Insofar as Bassman9000's example indicates a real problem (and there are a couple of ways it might do so, ultimately leading back to the poor use of abstraction and separation-of-concerns), stuffing the arguments (or any other weakly-related collection of variables) into an ad-hoc struct (or object) having no real cohesion is simply sweeping the dust under the rug, and is just as indicative of a likely design problem as are long argument lists of nominally unrelated data.
Why is there no cohesion in, let's say, a HitEvent object? It knows the target, source, method (weapon, magic, physics like falling), etc. You can make handy helper methods and subtypes depending on important concerns.
I think games are a particularly pathological case for OOP, and as such probably not a good example for the article's case. But FWIW, the problem with what you describe (and with OOP for games generally) is that game logic tends to be way too polymorphic for code like that. That is, players don't just Hit() monsters, they also hit items, traps, breakable terrain, etc., and they get hit by monsters, by projectiles, maybe explosions, fall damage, etc.
And the dilemma of doing all that in OOP is, you find yourself with 20 different things that can receive a Hit(), that have little in common otherwise. Some have hit points but others don't, some don't have an armor value, some need to receive knockback but others don't even have a physics body, etc. As a result, do you make Hit() accept 20 different types, with special cases for each? Or do you rejigger your class hierarchy so that those 20 classes all inherit from Hittable, etc? Either could work in this or that case, but neither's much fun.
This is all why ECS (or other aspect-based approaches) are so popular for games - they let you define very general "Hit(src, tgt)" chunks of logic, that don't care what type each object is, but can easily query whether or not they have hit points or a physics body.
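A rough Java sketch of that component-query style (hypothetical names and API, not any particular engine's):

import java.util.Optional;

record Health(int hp) {}
record Physics(double mass) {}

interface Entity {
    <T> Optional<T> component(Class<T> type); // query components, never downcast
}

class HitSystem {
    static void hit(Entity src, Entity tgt, int damage) {
        // only targets that actually have Health lose hit points
        tgt.component(Health.class)
           .ifPresent(h -> System.out.println("hp: " + (h.hp() - damage)));
        // knockback applies only if there's a physics body
        tgt.component(Physics.class)
           .ifPresent(p -> System.out.println("knockback vs mass " + p.mass()));
    }
}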
But again, I think games are a pathological case here and none of this should necessarily be considered an argument against OOP generally.
You can define an IHittable interface and get the code to compile, but it doesn't make the actual issue (polymorphism) any easier to tackle. I think eschewing inheritance in favor of aspects is definitely the way to go.
(Of course, you can wrap everything in a fluent-style interface either way - that part is orthogonal to all the rest.)
OOP is here to stay because it is the default model of how we see the world. You actually do not have to teach people to use it, just how to map it to OOP software systems like Java/C#.
It was great while Moore's law still lasted, and writing efficient software was becoming some strange quest reserved for Formula One fields of software like games, or other massive-workload fields like OS wizardry.
Today, the massive workload is still growing, due to sloppy architectural habits and user demands, but the saviors of parallelization and new chip technologies have not come to rescue the sloppy devs from the laws of leaky abstraction.
Your code runs on a hardware system.
And currently your code is running out of hardware.
And with that OOP is running out of excuses.
So you would write this.
Then, three months before shipping, it would be handed to some adult, and this adult will delete it.
All of it.
It's full of references, full of class bloat that will push one another out of the cache.
The compiler will try to yank the worst of this mental crutch out of the code, but some references cannot be dismissed deterministically, so they will be kept.
Which makes it incredibly slow.
What replaces it will be an efficient structure, keeping all the chunks necessary for collision detection in one very array-like field.
Over this, an algorithm traverses, acting with a function which does not reference non-local stuff, detecting collisions and damage. Allowing not only for a few players, but a workload of players.
It will look very ugly, like C-with-inline-assembler ugly, and it will be ready to ship.
> OOP is here to stay because it is the default model of how we see the world
A common but false statement. Firstly, there is no standard definition of OOP, only loose sets of features that define various overlapping but distinct paradigms that some subset of people call OO. So your statement is meaningless, as there is no single OOP that we can identify as how "people see the world".
Secondly, even given any specific definition of OOP, it's also false. There are many programs that simply aren't suited to OO modelling, because you want to express things as the problem being solved actually requires, and this is often not OO. Sometimes pattern matching is best (compilers), sometimes reactive/event paradigm is best (servers, UIs), and neither of these are OO, as but two examples.
And don't take my word for it; there are a few studies on novice programmers showing quite clearly that event-based temporal reactive primitives are what people find most intuitively natural, i.e. "when account balance <= 0 then do some action", which is a declarative reactive action that will execute when that condition becomes true. This is not an OO program in any sense.
A lot of languages are made for this kind of interaction and can hopefully inline stuff, but that's the job of the VM or compiler.
I didn't say it was the "right" way to do it. Also programming is the art of tradeoffs. If this does actually give bad performance, maybe it's a trade off the programmer is willing to make.
Just like the article started off with "I don't think there's a silver bullet". OOP is a tool just like any other with pluses, and minuses.
Also the Hit class may be a Singleton and only instantiated once.
>And this one line is why you're willing to setup all these crap boilerplate supporter classes and take a huge hit in runtime performance?
Hits don't usually happen that often in games relative to everything else the engine is doing, unless you're writing something like MMO server code that processes hits from thousands of players.
Yours is a classic case of over-optimizing, I think.
These examples are supposed to be examples, not what we would actually do in this specific case. In a huge enterprise system that has 500 different ways something could be done, in which your function would have 30 million if statements, all of which depend on runtime behavior, polymorphism can help solve that problem.
Imagine there are hundreds of ways Hit can be implemented and used in different situations. That's the reason for breaking out this type of abstraction. I agree the example the author used probably isn't a clear one in which one should use OOP.
> Imagine there are hundreds of ways Hit can be implemented and used in different situations.
This is the typical argument you get to hear from OOP apologists. And the codebase is rotting, just for concerns about hypothetical problems...
I remember one time when I was criticized for putting in too much global data and not enough classes "because what if we have two GUI instances?". The guy of course had no idea why he would want that, and how it should work.
>I'm talking about simple, procedural hit(monster, weapon, damage).
The moment your game needs to know anything else about that hit event, you're going to start creating side effects from that procedure and that QUICKLY snowballs into spaghetti code. Not to mention, just from the signature, that's going to be an absolutely huge procedure.
>Because it's not worth the effort. You won't notice any difference in speed. Be pragmatic.
That's the exact same case I'm making. A hit object every so often will have no impact on performance (certainly not a huge impact) like you originally said and you gain developer productivity.
Now an environment flag causes fire damage to deal +10% more. You're either going to have to extend the hit method again (further complicating it), or just pull that flag from the environment class in the procedure body itself (undeclared dependency). When something else outside monster/weapon/damage/environment has to change the hit calculations, you'll have to extend it again and again.
>And yes, the procedure probably has "side effects". Because that's the point.
Side effects are terrible for maintenance and debugging, you shouldn't be using that to defend your argument here -- unless you don't know what side effects are which is why it's in scare quotes?
>How do you intend to write less code with OOP?
The point is for it to be more human readable because it's unlikely there's performance impact in this case. You don't want to count code quality by lines/characters of code.
> When something else outside monster/weapon/damage/environment has to change the hit calculations
When something should change, you edit the freakin' code. There is no way around it. If you have different kinds of hits then you make different procedures. e.g. magicPixieDustHit(monster, pixieDust, weapon, damage).
> Side effects are terrible for maintenance and debugging
FP apologists want to make you believe that, but side effects are the actual point of the program.
>When something should change, you edit the freakin' code. If you have different kinds of hits then you make different procedures. e.g. magicPixieDustHit(monster, pixieDust, weapon, damage).
I dunno, maybe aim for something a little better than writing an exponential amount of functions for all of your interactions.
>FP apologetists want to make you believe that, but side effects are the actual point of the program.
If you want to make all of your co-workers hate you because randomProcedure changes shared state when it shouldn't, go right ahead. If you want to stop that, you're like two steps away from OOP's dependency injection when you write validator functions on the shared state object.
>I mean at least acknowledge that that's a highly subjective statement...
Games are about the worst choice for criticizing OOP because OOP is so useful for describing games. Procedural style is fine for Pong or Sudoku or really simple games, I guess. It's highly subjective in-so-far as the entire industry has mostly adopted it and released great games with it.
> I dunno, maybe aim for something a little better than writing an exponential amount of functions for all of your interactions.
You mean "polynomial". And you are still not acknowledging that OOP doesn't help there either. There is only one way out: You as the programmer must do the sensible thing. If you really need runtime polymorphism (which you rarely need, if you structure things correctly and keep separate things separate), then for god's sake go ahead and do it. But in my opinion it's much cleaner to do it with explicit function pointers.
> Games...
> the entire industry has mostly adopted it and released great games with it.
I think you are at least 10 to 20 years late. The games industry, especially in the AAA sector where performance is critical, seems to long have acknowledged that OOP doesn't work out.
> And this one line is why you're willing to setup all these crap boilerplate supporter classes and take a huge hit in runtime performance?
at least in C++, why would they? It all gets inlined, and you can do it so that there aren't any memory allocations if you know all of your cases at compile time (of course, most of the time you want to be able to add new weapons, behaviours, etc. at runtime, but in game engines you generally use scripting languages for this anyway).
It's nice that some C++ compilers can optimize this out. It means that it only costs maintainer cycles (less straightforward invocation), and compiler cycles. As every C++ programmer knows, there is an unlimited amount of them.
I'm pretty confused by how you perceive this. I don't see "crap boilerplate supporter classes" – I see objects that encapsulate events and actors for all the usual OOP reasons.
I wonder if some of this disagreement is down to the way that different people abstract this problem in their heads.
// player.hits(monster).with(weapon)
class Player { // i.e. add this method to the existing Player class
    // XXX: Bad coupling Player -> Monster
    // XXX: Bad coupling Player -> PlayerMonsterHit
    PlayerMonsterHit hits(Monster monster) {
        return new PlayerMonsterHit(this, monster);
    }
}

class PlayerMonsterHit {
    private final Player player;
    private final Monster monster;

    PlayerMonsterHit(Player player, Monster monster) {
        this.player = player;
        this.monster = monster;
    }

    void with(Weapon weapon) {
        doTheActualFrigginHit(this.player, this.monster, weapon); // this line is all we REALLY need
    }
}
I guess 15 lines (that do NOTHING) was not an overestimation.
They don't "do nothing" - they define structure that can then be used to handle more implementation later. It's a ridiculous, contrived example, and you know that.
Yes – this code is stupid if all you need to implement is a single line with the ability for a player to hit a monster. But that's never what you are implementing, is it?
If you want to delay the hit, i.e. transactioning / buffering / delaying: this is sound engineering! But that part is not even included in the above boilerplate.
In case it isn't obvious you can do that with either implementation, the above OOP boilerplate or the above imperative 1 line of code.
> Yes – this code is stupid if all you need to implement is a single line
No, it's stupid because it doesn't do anything. The data structures are only for temporaries that will never have any other use. In the end it's going to be PlayerMonsterWeaponHit. So the other classes are just stupid, and no amount of context will change that.
"Then Hit reaches into said objects and deals with changing the HP, and XP."
This is not essentially different from what happens in non-OO code. OO purists would probably object to this as a violation of encapsulation, and would insist on objects receiving a hit performing the update to their state themselves.
Stepping back from this example and considering transactions in general, it is often the case that there are certain constraints to be observed, and in such cases, it is easier to verify the solution if it is done compactly in one function. In such cases, creating a class and instantiating it just to have that function would be pointlessly excessive obeisance to a principle, and making it a singleton would only underscore that point.
There are minor issues when you venture off and decide that it would be cool if a monster could damage your weapon. Something like Monster.hits(Weapon).with(Weapon). You'll start wondering if Weapon should inherit from the player base class. If not, you'll spend time wondering how to share the code to keep things DRY. Obviously there are many solutions even within OOP, but one might consider looking at ECS to remove code duplication and decouple things.
I think damage to the weapon could still be handled in the Hit class, because Hit has a reference to Player and Monster and Weapon. You could even create different types of Hit classes that could execute different kinds of Hit behavior like this.
In that case it may be better to rename it to HitStrategy?
>You'll start wondering if Weapon should inherit from the player base class.
Probably not. If you want to do that, they should both implement an interface (and perhaps share some code through composition) but code inheritance is likely not the right approach here.
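A sketch of that interface route in Java (hypothetical names):

interface Hittable {
    void takeHit(int damage);
}

class Monster implements Hittable {
    int hp;
    public void takeHit(int damage) { hp -= damage; }
}

class Weapon implements Hittable {
    int durability;
    public void takeHit(int damage) { durability -= damage; }
}

// shared logic depends only on the interface, not a common base class:
// static void applyHit(Hittable target, int damage) { target.takeHit(damage); }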
The problem is that it's an artificial example; in reality you'll have a lot of different monsters, weapons, and most importantly separate subsystems. I.e. physics is processed in one pass, animation in another, and the same goes for custom logic and rendering.
Modern game engines use component systems that put data first for the same reasons author described in the post.
Been exactly there with a game. Solution? Take a data centered approach: a command object (just a function really) updates HP, XP, triggers visual effects etc. Monster and Player classes are just thin wrappers around plain data structures.
I tried the more "OO" approaches for all too long, thinking they would lead me right. They don't.
While this rant rang a bell at the time, I've always found it too easy. Java had non-public classes, anonymous classes and import static back then.
Nowadays, Javaland has stolen lambdas and var from Scala, moving away from a real kingdom of nouns (partially; you still need those pesky functional interfaces).
That was great. I read it for the first time. Similar scenarios happen in so many other fields. Some bad idea takes hold. Then schools teach it. Then more people invest time learning it so that they cannot admit it is bad and this goes spreading like wildfire and become sacred...
I've never liked classical OOP much, but multiple dispatch is a lovely paradigm. One doesn't define _classes_ per se, but rather just plain old boring structs.
    mutable struct Player    # mutable, so hit() can update the fields in place
        xp::Int
    end

    mutable struct Monster
        hp::Int
    end

    function hit(p::Player, m::Monster)
        p.xp += 10
        m.hp -= 20
    end
The nicest thing is how one doesn't need inheritance to "add a method" to an object. One just defines my_function(s::String) to be whatever, and it doesn't interfere with anyone else's code.
As a non-OOP-thinker, this seems completely natural to me. The verb hit doesn't belong completely to one noun, so it shouldn't be forced to live within the struct/class with the data (which really is about only that noun).
What problem is solved by forcing the function to belong to one of the actors, as in joe.hit(tiger).with(sword) or something? People say things about encapsulation, but it seems here that hit may change the state of Player, Monster and Weapon, so their states can't be fully private. In this code you can also have hit(p::Player, w::Door), if this were not allowed then perhaps you'd have to have separate hit_player_monster and hit_player_door functions... is that the problem?
What do you mean? It wouldn't be "if". You'd just have
function hit(p::Player, m::Goblin)
...
end
function hit(p::Player, o::Ogre)
...
end
In Julia, there's limited inheritance, so you could group monsters in a hierarchy to limit the redundancy (e.g. Goblin <: Monster <: Creature). Or you can use multiple dispatch like a trait system, to get even greater flexibility.
You can have free functions that operate on your classes instead of shoehorning every operation into one class or another or creating new ones from whole cloth just to hold a function (e.g. the Hit class mentioned earlier).
Writing methods makes it easy to add new types but not new operations; you have to change every class to implement a new method. Writing functions makes it easy to add new functions, but hard to add new types; you have to update every function for a new type.
Each has its place, and issues arise when certain languages (e.g. Java) or paradigms ("classical" OOP) make it impossible to use one or the other style.
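Both halves of that trade-off fit in a few lines (a Python sketch; the types and operations are hypothetical examples):

    # Method style: a new type is one new class, but a new operation means
    # editing every class.
    class Circle:
        def __init__(self, r): self.r = r
        def area(self): return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, s): self.s = s
        def area(self): return self.s ** 2

    # Function style: a new operation is one new function, but a new type
    # means editing every function.
    def perimeter(shape):
        if isinstance(shape, Circle):
            return 2 * 3.14159 * shape.r
        if isinstance(shape, Square):
            return 4 * shape.s
        raise TypeError(f"no perimeter rule for {type(shape).__name__}")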
"struct" here is meant in the same sense that the author of the post refers to "PoD objects". Just data in public fields.
Whereas a "class" is usually thought of as private data exposed only through methods.
There is no language that enforces this distinction; it's purely by convention. C++ makes it a little simpler by having a different default access level.
In any case, you are being needlessly pedantic and missing the point.
The above example is from Julia. Julia structs have C layout and no member functions.
The analog of "classes" in Julia are "abstract types"; e.g. `AbstractArray{T}` is anything that implements the AbstractArray interface and holds objects of type T. Abstract types have no instances (all objects have a concrete type, which may be a subtype of an abstract type).
Binary compatibility between class and superclass is a cool feature in many OOP languages: it allows fast dynamic dispatch via vtable and lots of shared binary code for non-virtual methods (you can always type-pun from class to superclass). Julia does not support that feature: your different methods can share source code, but they get compiled separately. This is good for performance (more aggressive inlining) and bad for compiling small shared libraries (it is very painful to create Julia apps/libs that work without invoking the JIT compiler at runtime; it can be done via a custom sysimg). In some sense, Julia is not well suited to closed-source business models.
A lot of these initial points I don't think are relevant -- you can model your data in objects, data structures are complex because business needs are complex, data models wind up having implicit graph dependencies as well...
BUT, "cross-cutting concerns" is where I think the main valid argument is. In my experience, OOP is just way too restrictive of a model, by forcing you to shoehorn data and code into an object hierarchy that doesn't reflect the meaning of what's going on.
So I totally agree with the conclusion: just store your data in "dumb" arrays with hash tables or database tables with indices... be extremely rigorous about defining possible states... and then organize your functions themselves clearly with whatever means are at your disposal (files, folders, packages, namespaces, prefixes, arrays, or even data-free objects -- it all depends on what your language does/n't support).
Exactly! So long as the developer approaches the solution expecting scaling and extensibility from the get-go, this should not be a problem.
Nope. Right there at the beginning is where the author goes off track.
Computation itself is the most important aspect of computing. Code and data are just complexity to manage.
> Do I have a Customer? It goes into class Customer. Do I have a rendering context? It goes into class RenderingContext.
I wholeheartedly agree with this. The naive approach to domain modelling is to classify the primitives of a domain into classes and stop there. In actuality, the processor of those primitives is likely what your class should be, and those primitives ought to be methodless data structures.
I.e., OrderFulfiller instead of Customer and Part classes.
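In Python terms that might look like the following (a sketch; OrderFulfiller, Customer, and Part are just the hypothetical names above):

    from dataclasses import dataclass

    # The domain primitives are methodless data...
    @dataclass
    class Customer:
        name: str
        balance: float

    @dataclass
    class Part:
        sku: str
        price: float

    # ...and the processor of those primitives is the class.
    class OrderFulfiller:
        def __init__(self, inventory):
            self.inventory = inventory  # e.g. {sku: quantity}

        def fulfill(self, customer, parts):
            total = sum(p.price for p in parts)
            if customer.balance < total:
                raise ValueError("insufficient funds")
            for p in parts:
                self.inventory[p.sku] -= 1
            customer.balance -= total

    fulfiller = OrderFulfiller({"bolt-3mm": 100})
    fulfiller.fulfill(Customer("acme", 50.0), [Part("bolt-3mm", 0.10)])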
>Computation itself is the most important aspect of computing. Code and data are just complexity to manage.
Of course you're gonna write some computation, else there would be no program. That's not the point here.
First, the author doesn't mean "data" as in what comes in; he means the data structures of a program.
Second, for the purposes of designing a program (and its computation part) data structures are a better guiding principle than objects. That's the argument being made.
"I whole heartedly agree with this. The naive approach to domain modelling is to classify the primitives of a domain into classes and stop there. In actuality, the processor of those primitives is likely what your class should be, and those primitives ought to be methodless data structures."
This is what I did not have the ability to articulate as well in an earlier comment. As far as I understand the parent, the takeaway is that OOP often goes astray when the developer fails to see that a given need can be handled by generics or a parent class, and instead instantiates their own class.
The problem with OOP is that it became so ubiquitous. Everything had to be OO, millions of hours spent trying to fit everything inside absurd taxonomies.
There are good bits in OO. You can find articles about how parameterizing large `switch` statements into objects can lead to obvious improvements, as in the sketch below.
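The classic refactoring looks something like this (a hedged Python sketch with hypothetical shipping rules):

    # Before: a switch-like chain that every new case has to edit.
    def shipping_cost_v1(order):
        if order["method"] == "air":
            return 10.0 + order["weight"] * 2.0
        elif order["method"] == "ground":
            return 5.0 + order["weight"] * 0.5
        raise ValueError(order["method"])

    # After: each case is a small object; new cases are added to the table
    # without touching the dispatch site.
    class AirShipping:
        def cost(self, order): return 10.0 + order["weight"] * 2.0

    class GroundShipping:
        def cost(self, order): return 5.0 + order["weight"] * 0.5

    SHIPPERS = {"air": AirShipping(), "ground": GroundShipping()}

    def shipping_cost_v2(order):
        return SHIPPERS[order["method"]].cost(order)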
My only conclusion is to bet on biodiversity. I learned so much from relational (db) logic, functional programming, logic programming, stack languages (Forth threaded code), etc. As soon as your brain senses something is useless, discard it and find another cool trick/theorem to grok.
I think for enterprise type software OOP works well. It easily allows us to re-use code and solve common problems once in a parent class and have that solution easily propagated to child classes.
However, when developing a video game, I ran into quite a few OO design conundrums that were, IMO, the hardest programming problems of my career. I started looking into data-driven design, and while I never changed my code to implement it, it looked like it might have been easier for the video game. I do not know for sure. But I do know that getting OO right in the game I was implementing was daunting. Maybe I was doing it wrong.
The one issue we kept running into was how to design things so that dependencies flowed in one direction: classes should not require references to classes higher up the food chain, and vice versa. It sucked to realize that your Bullet class required a reference to the BattleField class when the BattleField object was not being passed down through all the intermediate objects that separated the two. It could have been poor design, or rather not spotting that dependency early enough in the process to deal with it. But many things we did not know until the requirement or change came up; then it was programming somersaults to deal with it. Eventually we got better at rearranging things as they came up; basically, we got used to changing a lot of the design at the drop of a hat.
I do not know if data-driven design would have helped, but it sounded worth a shot. I must admit, though, that I remember a data-driven program I worked on, and it bothered me how much data had to be passed around that was not relevant to the method/class using it. And a lot of data got lumped together out of convenience.
This article seems to fit the template: Here are some abstract reasons why paradigm X is bad, and here is a class of problems that have a more straightforward solution in paradigm Y, therefore paradigm Y is better than X.
The real message here is that if you have a problem that nicely maps onto a relational database then use the database-like approach instead of OOP.
In my domain, I work on algorithms for a very specialized class of graphs whose structure and manipulation must obey a diverse set of constraints. I have tried implementing the core ideas using multiple popular paradigms but so far I did not find anything better than OOP.
Bashing imperative + structured + OOP is valid if you have a viable alternative. That viable alternative is proper namespacing, modularity and functional programming.
If your alternative is another form of spaghetti your problem is not OOP. Your problem is the way you build abstractions.
If your procedures and functions, the foundation of your program, are poorly thought out, then you have laid a shitty foundation for everything that follows.
Languages have two purposes: communicating, and shaping what their speakers can think. Purpose 1 is pretty obvious; for a programming language it means communication between a person and a computer.
Purpose 2 might not be as clear, but if you know that people cannot count in languages that have no numbers, it becomes obvious. There is a tribe whose language has only 0, 1, and many, and guess what: they can't tell the difference between 7 and 8.
Now back to programming languages and the communication between computer and programmer: the old programming languages were very close to the computer. As languages evolved, they became more 'human', easier for us to read and write.
OOP in that sense leans very close to concepts of normal humans. Objects, things objects can do, objects have separate responsibilities, etc. It's easy for a human to have such a model inside his head, because we already do this every day.
Now, as a programmer, most of what I need to do is represent the real world in a program. Since the real world is made up of things that do stuff, it's easy to model it with such a concept.
Most arguments against OOP always come from either a theoretical or academic background.
But in the real world, with real companies, real problems to solve and real programmers, OOP is used. Because it lends itself really well for representing the real world in a computer model.
EDIT: not saying that anyone that doesn't use OOP isn't a real programmer. But those people are more into the algorithmic or mathematical problem space, not a problem space where a real-world concept needs to be modeled. Most software is like the latter, and therefore most programs are OO. Is it the best solution for everything? Definitely not.
> OOP in that sense leans very close to concepts of normal humans. Objects, things objects can do, objects have separate responsibilities, etc.
This is not entirely accurate. Human languages are closer to functional languages: they have verbs that operate on nouns. Verbs are not attached to nouns, but rather, the operation of the verb is dependent on the noun.
I think the claim may have been that the brain is usually doing something more akin to "noun.verb(...)", rather than "verb(noun, ...)".
It's a very interesting question. The concepts of agency and intention are very important in human cognition. If we hear a sound, we wonder if some intentional agent (predator, enemy, etc) that we need to be aware of caused the sound. Or if it was something inanimate like the wind rustling a tree.
The OOP paradigm seems to map more closely to agents taking action. But perhaps my speculations aren't well grounded. I'm not sure.
It is a very interesting question. As an FP fan, I'd argue that we model the world primarily in terms of the actions we want to perform, with the objects secondary. But I'm obviously biased.
Furthermore, perhaps how we actually model the world mentally does not, and should not, bear much relation to how we do so in code.
I haven't made my mind up about these things, even though I lean towards the FP approach in my day to day coding. But I love (constructive) discussions about the issue because somehow I feel they're about more than just 'making shit work'. Aside from the occasional flame-war I really like the discussions on HN about this stuff, and usually there are at least a few comments that give me new insight into both OOP and FP.
Heat 2 tablespoons olive oil in a large skillet or paella pan over medium heat. Stir in garlic, red pepper flakes, and rice. Cook, stirring, to coat rice with oil, about 3 minutes. Stir in saffron threads, bay leaf, parsley, chicken stock, and lemon zest.
I'm not saying there's no place for OOP in the English language, but I strongly disagree that OOP always, or even most of the time, reads as OOP.
> OOP programs tend to only grow and never shrink because OOP encourages it.
Most long-lived programs tend to grow, because people add new features to them. This isn't something unique to OOP.
As for growth of OOP programs in particular - does no one ever refactor anything? Shrinking OOP code through refactoring is a daily occurrence at almost every job I've ever had.
>The vast majority of essential code is not operating on just one object – it is actually implementing cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p). What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player has a currentWeapon() getter?
As an indie dev this is something that I struggled with early on and my solution so far has been to choose the most obvious place where all those things should happen and just do it there (in this case it would be on the Player, in other less obvious cases it gets more fuzzy). What does the non-OOP solution for this problem look like?
> when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p). What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player has a currentWeapon() getter?
I don't see how this is a problem unless you think programming objects correspond with physical objects.
My first thought is make a separate Swing class representing the player swinging their weapon. There's probably an even better way to do it but this gets around the issues mentioned above.
    class Player {
        fun takeSwing {
            swing = new Swing(
                this.currentWeapon,
                this.location.offset(this.direction)
            )
            if swing.killedMonster {
                this.xp += swing.monster.level
            }
        }
    }

    class Swing {
        fun new(weapon, location) {
            this.weapon = weapon
            this.monster = findMonster(location)
            if this.monster != null { this.monster.takeHit(this) }
            this.killedMonster = this.monster != null && this.monster.dead
        }

        fun damage {
            return this.weapon.baseDamage
                 + this.weapon.bonusDamage
        }
    }

    class Monster {
        fun takeHit(swing) {
            this.hp -= swing.damage - this.defence
            if this.hp <= 0 { this.die() }
        }
    }
Good example. I think that's why people struggle with OOP: they do think that "programming objects correspond with physical objects". It's the same reason they struggle with storing state in databases: they try to map OOP onto tables.
>I don't see how this is a problem unless you think programming objects correspond with physical objects.
This is a problem because we don't just want "any old design that sorta kinda works" but a guiding principle to our design, and to find the optimal place for each action/data.
> What does the non-OOP solution for this problem look like?
    function hit(player, weapon, monster) {
        monster.hp -= weapon.damage
        if (monster.hp <= 0) { player.xp += monster.level }
    }
This is a major problem with single dispatch, a popular OOP implementation choice, but not the only one. Common Lisp and C++ have multiple dispatch, which is a generally accepted solution to this problem.
It seems like a lot of the challenges you're facing have to do with object-oriented design not being a good fit for your problem. Some problems really do organise well into independent actors; maybe the one you're working on isn't one of them?
I don't think C++ has a dynamic multiple dispatch mechanism. You can craft one with the visitor pattern or some precompilation or macro shenanigans. I'd love to hear about the best way of doing it.
No. Not dynamic. The best way is to use Common Lisp.
I don't know if there's consensus for other, lesser solutions. Stroustrup seems to think C++ just isn't good enough[1], but good ol' function overloading is good enough for the cases we're discussing.
Disagree. Unity works like this, and it's great for attaching multiple behaviours to an object (i.e. a Player can be Hit, but an Enemy as well). It saves you having to make up an inheritance hierarchy that never seems to work out (Player extends Hittable or something? But it also needs these 10 other behaviours...)
OOP is not a silver bullet but for some classes of problems it's the best tool available.
That's the reason why all good rich GUI frameworks are OOP-based, including HTML DOM we use on the web. GPU APIs, OS kernel APIs are OOP-based as well.
Yeah, that's a fair point. React isn't purely functional, but in practice I'd say it still leans heavily toward a functional approach.
You're encouraged to keep state only in top-level components, or in a functional-style state management library like Redux, and pass the data as props/parameters to pure components that are just functions. There's a huge emphasis on immutable data, and composing your various components/functions, passing them as props, etc.
The fact that React switched to using classes makes it seem less functional than it is, and honestly I'm not entirely sure why they decided to do so.
Anyways, you're not wrong, but the point I was trying to make is that React is definitely much more functional than other/older approaches to GUIs, and is popular in (large?) part because of that difference in approach.
> Doesn't state grow unmanageably large for complex GUIs?
Yes, and there are various ways to make this less of a problem. Still, React generally favors explicitly passing props down the component hierarchy, keeping the actual UI bits pure functions and composing them in various ways that is typical of FP.
Perhaps I should've specified that React, as a very popular 'GUI' library, is much more functional than many of the very popular libraries that came before it.
Anyways, my point was not that it's the first of its kind, or 'fully' FP, but rather that it's an example of how FP-style UI libraries can be a good solution, and even be popular because they're less OO in nature.
> it's an example of how FP-style UI libraries can be a good solution
On the lower level, the FP approach indeed sometimes yields much cleaner architecture, even for GUI code.
My point is, there’s no good alternatives at higher levels, where you want to build complex systems by combining components developed by different people/companies.
> and even be popular because they're less OO in nature.
That’s debatable. I don’t think the main reasons why React is popular are technical ones. Facebook is popular, and half a year ago its market cap exceeded $600B. It’s $392B now, but it’s still a huge company with 2.2B monthly active users. Many people want to achieve such success and view its technology as a silver bullet.
P.S. I’d like to add that OOP and FP are almost completely orthogonal. Here’s a good article about OOP in FP languages: https://medium.com/@gaperton/let-me-start-from-the-less-obvi... And many traditionally OOP languages adopted a lot of FP stuff: C#, JS, to lesser extent even C++ have now a lot to offer for functional-style programming.
Don't know why people keep insisting on this. OOP is a model. Functional is a model. Data-oriented is a model. We as programmers just have to use them, in the most effective way possible, to architect a solution to a problem. No one of those models can substitute for any other; they complement each other. There is bad OOP and effective OOP, just as there is bad functional and effective functional. The models are never bad by themselves. The programmers are.
Any programmer who bashes OOP in favor of any other model is just making a fool of himself and exposing his own naivety.
There's something very important here. It also tends to be overstated. As a former OOP guy, I struggle to explain what's going on to people who don't see it yet.
Perhaps beginning with praise might work best. OOA as a group analysis tool is probably one of the most powerful things to come out of computer science in the past 50 years. Oddly enough, nobody does it much. Many of the problems this author brings up with OOP actually work out for the best in OOA.
It's not all bad. But there are problems with where we are. Big problems. We need to understand them.
> At its core, every software is about manipulating data to achieve a certain goal
> This part is very important, so I will repeat. goal -> data architecture -> code.
Wait, what? No. That isn't what you just said. You said my goal was my goal and manipulating the data was the way to achieve that goal.
Take Shopify. They had a goal: Make a ton of money by running ecommerce stores.
They used OOP. They IPO'd and they're doing great.
You can argue all you want about how they would have done better if they'd done some other programming style, but the reality is that almost every startup that I see win in fields like Shopify's (where there are a ton of different concerns with their own, disparate implementation specificities[0]) do so with OOP codebases.[1]
In large corps like Google non-OOP with typed languages like Go might work great. Streams of data and all that. But for startups it's too slow. OOP is agile because you get some data and you can ask it "what can you do?" and you can trick functional or logical programming languages into kinda doing that too, but they do it poorly.
[0] Even wording this in a non-OO way was a stupid waste of time. I could have just said "different models and methods" and 99% of the people here would have nodded and the 1% would have quibbled.
[1] Some startups like WhatsApp are a bit of an exception, but even YouTube used Python.
Sometimes the best way to decide what to do is an OOP-like 100,000-line legal code. Sometimes the best way is something short and sweet, yet possibly not entirely clear, like the Ten Commandments. When the original designers chose correctly, everything will work quickly and reliably. When they didn't, you'll suffer for a long time. Given that, you'll spend almost all of your wall-clock time suffering and complaining about the designers' selection being wrong, with the side effect that most of your suffering will be due to poorly implemented examples of the dominant paradigm. "AKA OOP SUX"
In summary, given all of the above, there are two true statements that OOP works AND simultaneously you'll spend almost all of your mental effort on OOP not working. Generally, OOP being inappropriately hyper dominant at this time, means that non-OOP solutions will utterly master some very low hanging fruit for first movers who abandon OOP.
I've seen some truly horrific object-relational mappers trying to connect OOP to inherently functional software APIs, hardware device interfaces, "chronological engineering" in general, and persistent data stores. It's not surprising that, if you're working in those areas, abandoning OOP will lead to massive success.
I'm not sure that I agree 100% with all the points raised in this article, though that's possibly just a reaction to what reads to me as invective.
Here's another bit of food for thought along those lines, though: If you take all the elements of what's typically considered to be good object-oriented design to their logical extremes, you end up with a bunch of objects that each have exactly one operation, and are configured at construction time. They may have some very simple internal state (think counters and caches), but you probably want to keep that to a minimum.
The end result starts to look very, very similar to functional programming. An interface with one method is essentially a function. Constructor arguments do basically the same job as closures. Etc.
When you're looking at things from that perspective, the big difference is that functional languages almost force you to work that way, whereas it takes consistent, conscious effort to do it in OOP.
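That convergence is easy to see side by side (a sketch of my own, not from the article):

    # A one-method object configured at construction time...
    class Scaler:
        def __init__(self, factor):
            self.factor = factor        # the "constructor argument"
        def apply(self, x):
            return x * self.factor

    # ...is essentially a closure: the constructor argument plays the role
    # of the captured variable.
    def make_scaler(factor):
        def apply(x):
            return x * factor
        return apply

    assert Scaler(3).apply(10) == make_scaler(3)(10) == 30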
> Object-oriented programming is an exceptionally bad idea which could only have originated in California.
> — Edsger W. Dijkstra
I don't understand why Dijkstra gets quoted so often when he has been demonstrably wrong on so many topics (see his opinion on BASIC too) and he was such a dogmatic and unreasonable person overall.
I pretty much believe that all we have are bad solutions. OOP is a bad solution that is acceptable at a set of problems. Data oriented designs is an also bad solution that is acceptable at a set of problems. Anyone who has ever used any paradigm for big projects can do a write-up about how bad that paradigm is.
The problem of OOP is not OOP, but actually knowing only OOP. That creates a lot of hammer and nail problems. The same would be true if data-oriented had the same popularity, and everything was data oriented.
So, unless you ARE going to give me a __good__ solution, a silver bullet, it is silly to claim that a whole paradigm is absolutely inferior, especially when all you can give me is examples of bad paradigm-tech-problem matches or snippets of incompetent usage.
> Instead of a well-designed data store, OOP projects tend to look like a huge spaghetti graph of objects pointing at each other and methods taking long argument lists.
Uh, what? FP projects are the ones with crazy argument lists, has OP even heard of the Law of Demeter? Does FP magically prohibit a huge spaghetti graph of functions and ad-hoc types pointing at each other?
> The main point is: just because my software operates in a domain with concepts of eg. Customers and Orders, doesn't mean there is any Customer class, with methods associated with it.
What an observation, it's all a bucket of bits so why name anything? Rub your hands together, mutter an incantation and voila, software without all that obnoxious structure!
When I'm coding, I often find myself in a situation where, in order to reach my goal, I need to use a "bad" language tool (e.g. eval), violate some principle of good coding (e.g. avoiding side effects), or just have to write ugly code. I've come to the conclusion that a lot of writing good code comes down to recognizing those situations and stepping back to find a way you can change your design or strategy to avoid needing the ugly code.
I've never quite liked OOP, but I struggle to say exactly why. I wonder if some of it is that the class structure and hierarchies makes it difficult to step back and change the design in order to avoid having to write ugly code.
This is coming from a business consultant (BA/PM and configuration primarily), but in my experience the OOP paradigm is fine in most cases and is typically leveraged in a functional/compositional fashion, so I fail to see the issue with OOP as a paradigm. Generics and polymorphism can be used to build an essentially functional system that only uses domain-specific classes to handle edge cases.
In short, OOP as currently used, in my experience working on numerous RIA/municipal/county government enterprise document management systems, seems perfectly adequate to the tasks at hand and can be used in an effectively functional way.
What is the point of OOP, then, if you are using it in a functional way? This just proves the author's point.
Do you truly use it in a functional way, though? That would mean you care about the separation of state and identity. If you are mostly using immutable collections/objects with pure functions, then it's fine, but that is not OO.
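In Python terms, that separation looks roughly like this (a small sketch, with frozen dataclasses standing in for immutable objects):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Account:
        owner: str
        balance: int

    def deposit(account, amount):
        # Pure function: no mutation, just a new value.
        return replace(account, balance=account.balance + amount)

    a = Account("alice", 100)
    b = deposit(a, 50)
    assert a.balance == 100 and b.balance == 150  # the old state is untouched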
I think most people forget that Edsger W. Dijkstra was a computer scientist, not a software engineer. He most likely wrote programs but did not build software systems. As much as I've enjoyed most of his writings and musings, one must bear in mind that they most likely apply to programs, not software systems built in the large. If you are writing one little program and have one type of data, it's easy to begin with the data. When you have a complex system operating on 500 different types of data, it becomes easier to encapsulate with OOP.
OO bugs that I see are actually in the “invisible” parts of objects, as these require more experience to know what is really going on. For example, failing to implement necessary operators (or worse, implementing them in ways that are subtly incorrect). Knowing how to implement object “glue” properly is something that just doesn’t come up if you’re using simpler programming styles, and in cases like these it’s better to pick the simplest and most maintainable approach that will suit the task to avoid pitfalls.
Should be pointed out that this refers to C++/Java/Ruby/python style objects, and not smalltalk/erlang style objects, which is a message passing model for data storage.
Could someone enlighten me as to how Java style OOP differs from Smalltalk's variant? I've read the Alan Kay quote on how he's sorry for focusing on objects instead of message passing but I'm still unsure on how Smalltalk is so much better not having used it.
This may not answer your question and may not be 100% correct.
Message passing means that objects have their own data and do not have shared access to it. This makes multi-threading and parallelized code very easy to do because you don't have to worry about data changing unexpectedly. Clojure, and other languages, achieve this by making data immutable, which has a similar effect.
As you mentioned, after Alan Kay used the term "object-oriented" to describe Smalltalk, people focused on the objects instead of the messages, and created languages designed around the idea of being object-oriented rather than around message passing, thereby missing the benefit. (I believe Objective-C and Ruby do message passing.) All OOPLs not based on message passing should die.
Also, unlike Java, Smalltalk's classes are themselves ordinary, live objects that can be inspected and changed at runtime. (Its descendant Self went further and dropped classes entirely: any existing object can be used as a template to create a new object.)
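A toy sketch of the message-passing idea in Python (my own example, not Smalltalk semantics): the object owns its data, and only its own loop ever touches that data, so nothing else needs to worry about it changing unexpectedly.

    import queue
    import threading

    class Counter:
        def __init__(self):
            self.inbox = queue.Queue()  # messages come in here
            self.count = 0              # private state: only _loop touches it
            threading.Thread(target=self._loop, daemon=True).start()

        def _loop(self):
            # The single place where state changes, driven by messages.
            while True:
                msg = self.inbox.get()
                if msg == "increment":
                    self.count += 1

    c = Counter()
    c.inbox.put("increment")  # communicate by message, not by shared access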
I really think these kind of articles are valuable in the sense that they make us question the way we code our applications.
But I still think it's too easy to point to a problem without discussing possible solutions. I know the functional paradigm, focused on data streams and procedures, is what he is pointing at. But how does it address the graph-reference problem, for example?
The thing with OOP, besides being easy, is that it's diffuse. Any old tutorial will have an order+client example.
I find that I generally agree. However, I have the intuition that good data-oriented design requires an exceptionally solid understanding of the problem domain.
You might say that any good design requires that. I'm not going to disagree. But if there's a place for OOP, maybe it is as a preliminary abstraction, something to hold the chaos at bay while we build our understanding of the task at hand.
Of course, preliminary abstraction has a nasty habit of becoming permanent.
I didn't try to forget OOP per se, but I slowly adopted the author's guidelines regarding a data-first approach. Now most of my objects are just wrappers around data, rather than analogies to real-world objects.
However, I mostly work on web apps, so maybe this approach is better suited to such apps.
When working on desktop apps during my initial days as a programmer, I found OOP-based UI systems easy to grasp, but that could be due to lack of experience.
It's crazy how passionately people get behind their preferred paradigm. I think they all have a place, and can all be abused severely.
I'm all for people —at least those who don't answer to me— picking one and running with it, but if you're going to try and tear another one down, at least make sure you understand what it does for the people that like it. TFA's author does not appear to.
Encapsulation was never my favorite aspect of OOP either; in fact, I always try to create different classes for data and code, and it brings a lot of benefits. But the OOP patterns described by the GoF are brilliant in terms of code reuse and maintenance costs.
Most of the problems pointed out can be reduced into "confusing indirection as abstraction", which is a sin of many OOP developers, particularly those obsessed with design patterns.
IMO, OOP is not inherently any more flawed than any other paradigm.
Interesting article. Though the article should offer alternatives and speak to how other programming paradigms might solve the issues it raises.
OOP is successful because it's very good at modeling the world that surrounds us.
As much as people like to hate on inheritance and recommend composition instead (reasonable advice), the bottom line is that you will encounter the pitfalls of inheritance only on very rare occasions.
And inheritance enables so much flexibility and ease of maintenance that nothing comes close to it, not even FP.
Specialization and polymorphism(s) are the reasons why OOP is around, successful, and will remain so for a while. Anyone pretending it's bad or broken is just clickbaiting and does not understand the fundamental issues.
Picking a satire piece on how something can be abused is not a great argument of "this thing is bad".
For example, I'd argue anything written in an OOP language is going to be more readable than this (from the obfuscated C contest)
https://www.ioccc.org/2018/algmyr/prog.c
Also, complex systems tend to benefit from OOP because complex systems can have multiple combinations of behaviors. Something IMO OOP is useful for.
I'm a simple man. I don't have an advanced education in computer science (I did have a SICP-inspired first course in programming as an undergrad and an algorithms course in grad school using Dasgupta and Kleinberg/Tardos).
Like everyone who's doing stuff with data, I live in Python now. I'm fairly conversant in functional idioms (wrote a monad tutorial when it was all the fashion), but like objects because they're a straightforward way to keep data and code together in a bundle that can be passed around and serialized.
The typical situation for me is machine learning. I could have "fit_to_data" functions that return matrices, or sum/product types (as they're called in Haskell) containing the estimated parameters, plus "predict_from_param_struct" functions; but having the whole thing in one bundle erases the problem of having to remember that a weight matrix comes from a logistic classifier and not a linear regressor. That, or use an idiom like
data LogisticParams a = LogisticP (Matrix a)
(I don't remember Haskell syntax anymore) but I'm not sure how I'm better served by that. The scikit-learn-type idiom where every regressor is expected to have .predict and .fit methods helps me build new regressors while reutilizing all of the scikit-learn cross-validation/hyperparameter search tooling. The idiom above would require me to study the hierarchy of types upwards and whatever advanced features of the language they're using; but even if the scikit-learn team is using features of Python outside my understanding (say, async, or low-level numpy/communication with C libraries) I'm still able to adhere to a convention.
---
That said, have y'all ever heard of Formal Concept Analysis [1]? It seems to me that a lot of the malaise about OOP is that tutorials and examples, and maybe even production code (what the hell do I know), are struggling to define ad hoc concept hierarchies -- is it Kitchen.Lights.off() or Lights.off("kitchen")? What scikit-learn (and the universe around it that tries to adhere to the dialect) does is simply embed data (parameters derived from data) in code that knows how to use it. No one's trying to decide how TopologicalSpace.MeasurableSpace fits with IntegrableFunctions.ProbabilityMeasures...
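Adhering to that convention is just a matter of shape. A toy sketch (not real scikit-learn code) of how any object with .fit and .predict can plug into tooling written against the convention:

    class MeanRegressor:
        def fit(self, X, y):
            self.mean_ = sum(y) / len(y)   # trailing underscore: sklearn's
            return self                    # convention for learned state
        def predict(self, X):
            return [self.mean_ for _ in X]

    def evaluate(model, X, y):
        # Toy stand-in for cross-validation tooling: it needs only the
        # .fit/.predict shape, not a type hierarchy.
        model.fit(X, y)
        preds = model.predict(X)
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    print(evaluate(MeanRegressor(), X=[1, 2, 3], y=[2.0, 4.0, 6.0]))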
> The vast majority of essential code is not operating on just one object – it is actually implementing cross-cutting concerns. Example: when class Player hits() a class Monster, where exactly do we modify data? Monster's hp has to decrease by Player's attackPower, Player's xps increase by Monster's level if Monster got killed. Does it happen in Player.hits(Monster m) or Monster.isHitBy(Player p). What if there's a class Weapon involved? Do we pass it as an argument to isHitBy or does Player has a currentWeapon() getter?
You have a function that has access to these objects. Based on that it will update the player and monster.
    Function onAttack(player As Player, monster As Monster, weapon As Weapon)
        player.addXp(weapon.xp)
        monster.addHealth(-weapon.damage)
    End Function
Probably you'll want to have immutable data instead of mutating this.
(I'm on mobile so I can't put too much code)
The reason you do this is to decouple code. You don't want the player to be aware of the monster, at least not in this scenario.
I completely agree. For a DataStore in Python, I often use a Pandas DataFrame or similar. For random label-based access it can have an index too.
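Concretely, something like this (a small sketch; the columns are hypothetical):

    import pandas as pd

    # A DataFrame as the "dumb" data store, with an index for random
    # label-based access.
    players = pd.DataFrame(
        {"name": ["alice", "bob"], "hp": [100, 80], "xp": [0, 10]}
    ).set_index("name")

    players.loc["alice", "hp"] -= 20    # label-based random access
    alive = players[players["hp"] > 0]  # queries are just filters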
Unfortunately, when asked to design an OOP architecture in a job interview, if you don't adhere to its religious enterprisy notions, you can risk failing the interview.
“OOP apologists will respond that it's a matter of developer skill, to keep abstractions in check.”
In my experience, the proliferation and use of ORMs makes keeping abstractions in check nearly impossible... in fact, I see ORM use as the primary design decision leading to the bastardization and convolution of sound OOP design.
Yeah, I agree so much. OOP is good when you are in school and have a perfect use case like Vehicle, Car, Bicycle, but when you work on something else, you end up with abstract, absurd, bloated objects. OOP is only profitable for IT consultants, because they can sell books and bill many hours working on this mess.