OOP provides competing ways of abstracting behavior that in Haskell we can model with type parameters and constraints. Objects are not truly encapsulated in the way Erlang processes are, and they are a poor fit for SMP. Objects also pathologically hide data in an attempt to manage mutability, making it impossible to reason about the memory layout of the program.
All in all, OOP is a toolkit for building bad abstractions: abstractions that do not easily model computation, that hide data, and that tend to create overly complex solutions to problems, full of errors that a language focused more on type expressivity could catch at compile time.
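For what it's worth, the "type parameters and constraints" style the parent alludes to has rough analogues outside Haskell. Here is a minimal Python sketch (the names `Shape`, `total_area`, etc. are illustrative, not from any source) using a structural `Protocol` as the constraint instead of an inheritance hierarchy:

```python
from typing import Protocol


class Shape(Protocol):
    """Structural constraint: anything with an area() method qualifies."""
    def area(self) -> float: ...


def total_area(shapes: list[Shape]) -> float:
    # No common base class needed; the constraint is on behavior, not ancestry.
    return sum(s.area() for s in shapes)


class Circle:
    def __init__(self, r: float) -> None:
        self.r = r

    def area(self) -> float:
        return 3.141592653589793 * self.r ** 2


class Square:
    def __init__(self, side: float) -> None:
        self.side = side

    def area(self) -> float:
        return self.side ** 2


print(total_area([Circle(1.0), Square(2.0)]))
```

Neither `Circle` nor `Square` declares any relationship to `Shape`; the constraint is checked structurally, much as a Haskell type class constrains a type parameter by the operations it supports.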
When I read articles complaining about OOP, I just can't relate at all.
The idea of functional programming makes no sense to me except for writing very specific simple programs. For example, I like using some functional programming on the front end with VueJS but even there, I still allow mutations in certain parts of the code.
I like being able to store state in different instances and have each mutate independently of the others.
I like it when related logic and state is kept close together in the code; it makes it easy to reason about different parts of the code independently.
On the other hand, with functional programming, it can be difficult to figure out where state comes from because there is very little abstraction to separate different parts of the code.
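The parent's preference can be sketched in a few lines (a hypothetical `Counter` class, purely for illustration): each instance owns its own state and mutates it without affecting its siblings.

```python
class Counter:
    """Each instance owns its own state, mutated independently."""

    def __init__(self) -> None:
        self.count = 0

    def increment(self) -> None:
        self.count += 1


a, b = Counter(), Counter()
a.increment()
a.increment()
b.increment()
print(a.count, b.count)  # 2 1
```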
Most problems with OOP start when people take the longest possible path to a goal. Enterprisey code: thousands of classes (AbstractClassFactoryFactorySingletonDispatcherFacadeInitializer types), dependency injection, design-pattern abuse. Then on top of this come things like the Spring framework. At that point you have two problems to deal with: the problem itself, and the complexity of the language.
This phenomenon has persisted in the OOP world almost forever. Things like Maven have helped a little, but complexity hell has been the mainstay of the OOP world for decades now.
Sure, you can write very readable OOP code. Every time I do, I see great discomfort on the faces of Java programmers during code reviews. For example, over all these years I haven't met a single Java programmer who could explain why Beans are good, or even needed. So you have these religious rituals Java programmers perform.
They feel like their job protection scheme is at risk.
Yes, AbstractClassFactorySingletonWhatever classes do exist in the Spring framework. They are there to abstract away the complexity and flexibility of the framework so that you, as the application programmer, can have a simple, productive programming environment, and can still reach into the complexity (and extend it) when you need to.
Beans, interface inheritance, implementation inheritance and dependency injection are the building blocks of a programming environment that allows me to be very productive, and still write maintainable, testable, extendable and configurable code.
"reusable"... reusable code? That was the original point of OOP.
After a few years as a believer in the late 90s, as a C++ Ubermensch, I had the horrible realization that neither I nor anyone else would ever reuse any of the classes I had given beautiful interfaces to and carefully documented. Any time I spent planning for reuse I might as well have spent staring out the window.
Of course, who needs to reuse your own classes when you have the huge canker of Boost to drag around with you from task to task.
You are either writing a library, a framework, or something like that, or you are implementing some 'business logic'.
The point of the former is reusability. The point of the latter is emphatically not, and you should not waste your company's time and money, or worse let schedules slip, chasing it.
Some further thoughts over the years.
Reusable code must have a public, well designed, stable, and documented API, or it won't ever be reused. Be honest: how many programs are going to reuse this code? Enough to justify all of the above? Didn't think so.
One of the problems with reusable code is dependencies. OOP code bases tend to have more than the ordinary number of dependencies.
OOP as practiced holds that every class should be a "library". Procedural code without a class context is felt to be a shameful throwback to an earlier era of degenerate C practices, from before some people started writing "good C++", apparently.
Compare with business logic. The problem is that the spec and the use cases keep moving, way too fast for the code to make a good library or framework.
>>> maintainable, testable, extendable and configurable
but not reusable. I'm not blaming you, or disagreeing... C++ didn't produce reusable results for me either.
I dunno what you were doing 20 years ago, but I was doing this... and there wasn't any problem with the classes being unclean or the boundaries cut in the wrong place. It was simply that I was almost never going to use those classes a second time. Had they been C functions, I would almost never have needed them a second time either.
After a year or two I realized that, this being the case, the entire loving OO encapsulation of them was not simply worthless but an active waste of time. And I went back to C.
If you get value from other C++ things that make it worth paying the price, that's great. But file away, for a possible future horrifying 3am realization, the thought that perhaps nothing is worth the price of a gyre like Boost, and that just writing it in high-quality C may be a better answer.
That's probably what the functional programming, 100% code coverage, and code linting trends are really about: allowing companies not to have to trust their engineers. I think this is a futile effort. If you want better code, just hire better, more experienced developers who can be trusted. If you are not able to identify and recruit such people, then you shouldn't be a manager.
Bad developers will find a way to write terrible code; in any language, any paradigm, with strict linting rules enforced and even with 100% test coverage.
It really bothers me that most companies feel perfectly fine trusting financiers, lawyers and accountants with extremely sensitive company secrets and business plans but they absolutely refuse to trust engineers with their own code.
Some of what you're talking about can be overdone by engineers (i.e. 100% line coverage), but a lot of what you're talking about are tools engineers have developed to make their lives easier and to improve their code quality. A linter checks for common mistakes, clarity problems, and can help ensure code is written in a consistent style which improves readability. Functional programming is a coding style that doesn't always mesh well with the languages it's tried out in, but when it does it can provide enormous benefits in local reasoning about code and I think a large part of why it's seeing a resurgence is the experience of engineers having to build large, complex systems that become baroque and difficult to reason about in OO/procedural style.
This idea that good engineers have perfect competency and write code that is immune to the problems these tools help solve is absurd and totally disconnected from the reality of the challenges involved in building software. Even the best and most respected engineers write code that is riddled with bugs, and there are CVEs to prove it.
Engineering tools and new paradigms weren't invented to add friction and police developers, they were invented to aid our very real, human cognitive limits in writing software. If anything some of these things make the process more enjoyable and far less error prone.
I think that a lot of rules enforced by code linters are rules that are good 90% of the time, but they're bad 10% of the time.
Having 100% test coverage is great for guaranteeing that the software behaves the way it's supposed to, but it's terrible when you want to change that behavior in the near future. Most systems should be built to handle change, and 100% test coverage disincentivizes change, especially structural change.
Also, more advanced project management tools like Jira don't actually add value to projects; they just give dumb executives the illusion of having more visibility into the project. Unfortunately, they cannot really know what's going on unless they understand the code.
And I agree that project management tools (especially JIRA) are pretty frustrating and provide a dubious value proposition. Project management is going to need to have a reckoning some day w/ the fact that it's very difficult to glean any insight from estimates and velocity (and that very few places actually even measure these things in a consistent fashion). The only value to "agile" and the ecosystem surrounding it IMO is that it de-emphasized planning to some extent and pushed the importance of communication. These are ultimately human problems though, not ones that can be solved by technology. Also I think there's a laughably small amount of interest in giving software engineers the time to train and learn.
Software engineering as a discipline is still in the dark ages. We don't really have solid data on the impact of things like programming language choice, development methodologies, etc. Most of the conceived wisdom about these things is from people who are trying to sell their services as consultants and offer project management certifications. That's not to say that there is no value in tools, techniques, language features, etc. to software engineering, there's a huge value, but we need to better understand how contingent the advantages to these decisions are.
As always, I think the things en vogue in software engineering are somewhat of a wash. I'm very happy that statically typed functional programming languages have seen a resurgence because I think they offer an enormously better alternative to TDD-ing your project to death by allowing you to make error states unrepresentable and discouraging mutable state (also I think encoding invariants in types actually makes it faster to adapt to changing requirements, especially w/ the refactoring abilities in editors). On the other hand, there are lots of bad ideas about architecture and scaling today, particularly with the "microservice" trend and I think people poorly understand how these things were mostly created as a response to organizational growth, not per se in search of better performance or some kind of invariant best practices about architecture.
In any case, I think there is an inherent tension w/ management that has indirectly led to some of these practices, but I would push back on the idea that we only adopt them to satisfy management. Tools that push you towards correct and safe code IMO make the job less anxiety-inducing and give your code a chance to actually meet the expectations users place in your project by using it. In my experience these are the kinds of things management couldn't care less about until they impact profitability.
Testability, with the poverty of tools available in languages like Java, is the single biggest driver, and the alleged induced "improvements" in extensibility and factoring make people feel happy about making all their objects fully configurable and all their dependencies pluggable and replaceable so they can be mocked or stubbed, with scarcely a thought for the costs of all this extra abstraction.
Extensible and configurable code is not an unalloyed virtue. For every configuration, there is a choice; for every extension, there is a design challenge. These things have costs. When your extension and dependency injection points only ever have a single concrete implementation outside of tests, they bake in assumptions on the other side of the wall and are not actually as extensible and configurable as you think. And your tests, especially if you use mocking, in all probability over-specify your code's behaviour, making your tests more brittle and decreasing maintainability.
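The over-specification problem can be shown with a tiny, entirely hypothetical example in Python's `unittest.mock` (the `apply_discount`/`discount_rate` names are invented for illustration): the test below pins down *how* the collaborator is called, not just the observable result.

```python
from unittest.mock import Mock


def apply_discount(order_total: float, pricing) -> float:
    # Hypothetical business logic: ask a pricing collaborator for a rate.
    rate = pricing.discount_rate(order_total)
    return order_total * (1 - rate)


# Over-specified test: it asserts on the exact interaction, not the outcome.
pricing = Mock()
pricing.discount_rate.return_value = 0.1
assert apply_discount(200.0, pricing) == 180.0
pricing.discount_rate.assert_called_once_with(200.0)
# If apply_discount later caches the rate, or passes extra context to the
# collaborator, the behaviour (a 10% discount) is unchanged, but the
# interaction assertion above breaks anyway: a brittle test.
```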
And parameterization that makes control flow dependent on data flow in a stateful way (i.e. not mere function composition) makes your code much harder for new people to understand. Instead of being able to use a code browser or simple search to navigate the code, they must mentally model the object graph at runtime to chase through virtual and interface method calls.
I think there are better ways to solve almost every problem OOP solves. OOP is reasonably OK at GUI components; that's probably its optimal fit. Outside of that, it's not too bad when modelling immutable data structures (emulating algebraic data types) or even mutable data structures with a coherent external API. It's much weaker - clumsy - at representing functional composition, with most OO programming languages slowly gaining lambdas with variable capture to overcome the syntactic overhead of using objects to represent closures. And it's pretty dreadful at procedural code, spawning the all-too-common Verber objects that call into other Verber objects, usually through testable, allegedly extensible indirections - the best rant I've read on this is https://steve-yegge.blogspot.com/2006/03/execution-in-kingdo... .
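The Verber-object pattern is easy to parody in a few lines. A sketch in Python (the `GreetingPerformer` class is invented for illustration), showing the same behaviour with and without the object ceremony:

```python
# "Kingdom of nouns" style: a class whose only job is to verb.
class GreetingPerformer:
    def __init__(self, name: str) -> None:
        self.name = name

    def perform(self) -> str:
        return f"Hello, {self.name}!"


# The same behaviour as a plain function, no object ceremony required.
def greet(name: str) -> str:
    return f"Hello, {name}!"


assert GreetingPerformer("Ada").perform() == greet("Ada")
```

In languages without first-class functions, the class version is the only option for passing the behaviour around, which is exactly the syntactic overhead the comment describes lambdas relieving.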
Part of the reason why Go has caught on so well, and why languages like Python and Perl still see such widespread use, is that many people aren't developing monstrous monolithic applications like in the pre-2000s.
You needed all those big complexity-management features because people were packing every tiny little feature into one application callable from 'public static void main'. Then you spot some very abstract patterns of code that overlap across such a giant monolith, and to exploit them you unload the entire design-pattern textbook.
This was totally unnecessary. In fact it's perfectly OK to have 5-10% duplicate code if that buys simplicity and maintainability for the remaining 95% of the code.
The rise of microservices architecture and related trends is only going to reinforce this way of developing software.
>>Yes, AbstractClassFactorySingletonWhatever classes do exist in the Spring framework, they are there to help abstract away the complexity and flexibility of the framework so you as the application programmer can have a simple, productive programming environment, and you can still reach into the complexity (and extend it) when you need to.
I'm not sure, but there are other language communities which solve the same problem without writing 30 classes just to do variable++
>>Beans, interface inheritance, implementation inheritance and dependency injection are the building blocks of a programming environment that allows me to be very productive, and still write maintainable, testable, extendable and configurable code.
Yet to see such an environment.
It feels like Java community programmers create tons of complexity, and then tons of complex frameworks to solve complexity that shouldn't even exist in the first place.
I think the reason is this: you have a large sprawling codebase, consisting of tens of thousands of 'business rules' or more, and you have to find a way to allow cheap, run-of-the-mill programmers to make changes to those business rules in an approachable manner.
I don't think it's fair to evaluate enterprise OOP code based on the style you would prefer when undertaking a standalone project, even a large one, when one of the very problems that this enterprise style is trying to solve is how to have hundreds of fairly green developers work on the codebase.
Now, as an exercise imagine taking this social problem I've described above, and implementing it in your favourite programming style, be it lisp/scheme, functional programming, JS or whatever you prefer, and try to honestly imagine whether it would hold up any better under those conditions.
* Encapsulation might make it safer for programmers to modify code without affecting other areas.
* Inheritance will make it easier to set standards for large numbers of high turnover programmers.
and that it then grows complex due to various, mostly social, factors.
The testable hypothesis in this is that if you started with another programming model, and applied the same social pressures to it, it would end up just as ugly and complex.
These codebases have hordes of programmers banging away on them, with little regard for the bigger picture which makes the code ugly and inconsistent.
Architects then react to this and attempt to enforce standards by introducing patterns and subclasses and facades etc, which is what makes it complex.
Basically, I'm saying that the main straw man used to argue against OOP (the enterprise codebase) is chosen for the wrong reasons.
Put any other programming style under the same pressures and you'll end up with a similar big ball of mud.
Java/C++ -> Rust:
It will feel familiar to you with its C style syntax and multi-statement function bodies. It takes the best parts of many different paradigms, and marries them in a very coherent way. It's performant, has incredible tooling (cargo, rustup), and the community is great.
Python/Ruby -> Elixir:
Elixir has the Phoenix framework, great support for the web, and massive concurrency via the Actor model. Jose did a great job cleaning up the Erlang syntax, and its gradual typing and heavy use of macros will feel immediately familiar to you.
Our backend services are written in Rust for things that need efficient computation.
If you have a project where a single person could reasonably understand all the components then maybe OOP is overkill.
I also recognize others may have had an equal and opposite experience.
> My personal anecdotal experience is that OOP lends itself well to very very large codebases.
In my experience Java/C++-style OOP creates very, very large codebases in the same way. I think it's quite surprising that many engineers don't seem to have any idea how much waste there is in their "clean" Java code (or C++ that follows the same style).
I also almost always find it far easier to read dataflow-oriented code or functional code compared to their OO counterparts, because all the state transitions are made absolutely clear. In OOP, your system state is spread across every file in the project. Compare to, say, a website implemented using React/Redux. I can inspect a single object to see the entire state of the application. When there's a bug, usually either that object has the wrong value (so I know where to fix it), or the object has the right value and the bug is in the rendered component. Easy peasy. In comparison, in a Java program the bugs often show up between classes: they relate to the relative timing of function calls that mutate state, or to object lifecycles conflicting in weird ways. It's much worse for comprehension and much more work to debug. (And yes, I've spent years working in both styles professionally.)
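The Redux comparison can be sketched in a few lines. This is Python standing in for the JS original, with invented action names, but the shape is the same: one state value, and pure functions that compute the next state.

```python
def reducer(state: dict, action: dict) -> dict:
    # Pure function: returns a *new* state, never mutates the old one.
    if action["type"] == "add_todo":
        return {**state, "todos": state["todos"] + [action["text"]]}
    if action["type"] == "clear":
        return {**state, "todos": []}
    return state


state = {"todos": []}
for action in [{"type": "add_todo", "text": "write tests"},
               {"type": "add_todo", "text": "ship it"}]:
    state = reducer(state, action)

# The entire application state is inspectable in one place:
print(state)  # {'todos': ['write tests', 'ship it']}
```

When something looks wrong, you check this one object; you never have to reconstruct state scattered across dozens of mutable instances.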
The context you describe, being a teacher, is exactly the kind of case where I was suggesting OOP may not be beneficial. If you're in a programming class, you are generally not maintaining codebases with millions upon millions of lines of code. In such a case, where a single person is likely capable of understanding the entire codebase, OOP probably isn't bringing any serious benefits.
I think people also have different ideas on readability. Some prefer a single file with 2k lines, and some prefer 40 small files in 12 folders with 50 lines each. Like the parent says, you can often write the same code at 4x the length, be it to prepare for future features or just to make it look "clean".
"Good code" is different for different people.
I suspect you could objectively measure this - take two implementations of the same problem; one small and one large but where the implementations do the same thing. For example, write a simple website using a stateful OO style. Then write the equivalent code using the mostly stateless react component style. The latter would be functionally equivalent, while using less code. The latter would also use much less local hidden state.
Then measure how long it takes new people to start making meaningful changes to the two respective codebases. I don't think it's a matter of opinion or preference. I expect that the functional component model would come out as a clear productivity winner.
The easiest code to change is code you never needed to write in the first place.
My experience with OOP is that it generally drives people to very large codebases that are hard to reason about at scale. Usually it brings in a framework of some sort to manage the complexity of wiring objects together in a manageable way. For example, Java Spring uses annotations to magically wire objects together.
> OOP has proven a poor model for computation
> OO code is almost always significantly more complex and error prone than an equivalent computation written in a concurrent, functional, or structured paradigm.
These claims may or may not be true, but they aren't very useful at all if you don't provide a justification as well as merely asserting them.
> Recent trends in language design (see Rust, Go, Elixir) also seem to be abandoning OOP in favor of other models
There are a number of ways to account for these trends besides OOP being an intrinsically bad model. The most obvious one is that there are fashions in language design, and right now OOP is not fashionable. We already know that; it's not a strong argument against it. Ironically, many in the OOP opposition use the same argument to explain OOP ever getting popular in the first place: "it was just fashionable."
> OOP provides competing ways of abstracting behavior that in Haskell we can model with type parameters and constraints.
> Objects are not truly encapsulated in the way Erlang processes are and a poor fit for SMP.
You have pointed out here that more classical OOP languages do things differently from Haskell and Erlang. This should be expected and is not an argument against those OOP languages. (Yes, you could say "Erlang is better at concurrency" because of the way in which it's different, but my understanding is that it's pretty well accepted that Erlang is a sort of freak of nature here, so it's not a good argument against OOP generally.)
> Objects also pathologically hide data in attempt to manage mutability, making it impossible to reason about the memory layout of the program.
They do hide data, but the 'pathologically' is something you've added on your own. There is a design philosophy in which this data hiding plays an important, positive role. When you say "making it impossible to reason about the memory layout of the program," this sounds to me like missing the point of that design philosophy: the purpose (and oftentimes the tradeoff) of higher-level languages is that you don't need to personally manage these details. I think it's largely an application-dependent thing: you may be writing code that requires that, but not all interesting software hinges on low-level performance tuning.
> OOP is a toolkit for building bad abstractions:
> ... abstractions that do not easily model computation
> ... that hide data
> ... and has tended to create overly complex solutions to problems that are often full of errors that a language focused more on type expressivity could catch at compile time.
Another collection of unjustified assertions, except the 'hide data' part which I accounted for earlier.
So across ~10 negative assertions about OOP you have 3 quasi-justifications: newer languages aren't using OOP as much, hiding data is bad, and Erlang is better for SMP.
I also suspect the problems with OOP are hard to communicate. I for one always had a problem with OOP, but I could never quite point it out. Sure, when faced with an OOP design, I could almost always find simplifications. But maybe I never saw the good designs? Maybe this was OOP done wrong?
I do have reasons to think OOP is not the way (no sum types, cumbersome support for behavioural parameters, and above all an unreasonable encouragement of mutability), but then I have to justify why those points are important, and why they even apply to OOP (it's kind of a moving target). Overall, all I'm left with is a sense of uneasiness and distrust towards OOP.
That may very well apply to the GP's comment, but my observation of the pattern is derived from a mix of mini-essay comments and articles people write on Medium or their blogs or wherever, where the space constraints aren't so tight.
There are a couple things you'll regularly find: laughably bad straw-men (GP is free of these), overly vague statements that only survive scrutiny because of their vagueness (e.g. when the GP says OOP produces "abstractions that do not easily model computation"), and unjustified claims.
The net effect is something that sounds bad, but if looked at closely carries very little force.
I suspect the reasons for it are:
1) Actually evaluating a language paradigm is more difficult than these folks suspect. Their view matches their experience and they assume their experience is more global than it really is. Additionally, we don't have a mature theoretical framework for making the comparisons.
2) People are arguing for personal reasons. They have committed themselves to some paradigm and they want to feel secure in their justification for doing so.
This is really the problem. As much as I have strong opinions and beliefs about how to architect code, every argument I come up with boils down to some flavor of "I like it better this way". Which is true -- I do like it better this way -- but hardly actionable, and it doesn't get at the essence of why I like it better.
The problem with making everything an object -- or more precisely, having lots of mutable objects in an object space with a complex dependency graph -- is that it becomes very hard to model both how the program state changes over time and what causes the program state to change in the first place. I think the prevailing OOP fashion is to cut objects and methods apart into ridiculously small pieces, which takes encapsulation and loose coupling to an extreme. This gives rise to the popular quip, "In an OOP program, everything happens somewhere else." I can't think straight in this kind of setting.
I believe that mutable state should be both minimized and aggregated. As much as is humanly possible, immutable values should be used to mediate interactions between units of code (be those functional or OO units), and mutation should occur at shallow points in the call stack. Objects can work well for encapsulating this mutable state, but within the scope of an object, mutation should be minimized and functional styles preferred.
Using a functional style doesn't mean giving up on loose coupling or implementation hiding. Rust, Haskell, and plenty of other languages support these same concepts in the form of parametric polymorphism, e.g. traits or typeclasses. It does mean giving up on the idea that you can mutate state whenever it's convenient. Instead, you have to return a representation of the action you'd like to take, and let the imperative shell perform that action.
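The "return a representation of the action" idea can be sketched concretely. In this Python illustration (the `SendEmail`/`welcome_action` names are invented, not from any library), the core computes a plain immutable value describing an effect, and only the shell performs it:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SendEmail:
    """An immutable *description* of an effect, not the effect itself."""
    to: str
    body: str


# Functional core: pure, trivially testable, no I/O anywhere.
def welcome_action(username: str, email: str) -> SendEmail:
    return SendEmail(to=email, body=f"Welcome aboard, {username}!")


# Imperative shell: the only place that actually performs effects.
def run(action: SendEmail) -> None:
    print(f"sending to {action.to}: {action.body}")


run(welcome_action("ada", "ada@example.com"))
```

Testing the core requires no mocks at all: you assert on the returned value, and only the thin shell ever needs integration-level testing.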
Speaking of imperative shells and functional cores, Gary Bernhardt's talk called "Boundaries" is an excellent overview of this kind of architecture. There was also a thread here on HN about similar principles.
Btw, one other idea I've had on the subject is that the problems with mutable state can be mitigated if we were able to more easily see/comprehend the state as it's being modified by a program; without that capability the only recourse we're left with is our imagination, which of course is woefully inadequate for the task. You can see more concretely what I'm talking about in my project here (video): http://symbolflux.com/projects/avd
From what I've seen, structuring a program to not modify state is almost always more difficult than the alternative. There are certain problems where this difficulty is justified (because of, e.g., reliability demands); but I think most problems in programming are not those, and if we could just mitigate the error-proneness of state mutation, that may leave us at a good middle ground.
 The exception is when you're in a problem domain that can naturally be dealt with via pure functions, where you're essentially just mapping data in one format to another (i.e. no complex interaction aspects).
> From what I've seen, structuring a program to not modify state is almost always more difficult than the alternative
You're not wrong! I don't think we should get rid of mutable state, but I do think we should be much more cognizant of how we use it. Mutation is one of the most powerful tools in our toolbox.
I've found that keeping a separation between "computing a new value" and "modifying state" has a clarifying effect on code: you can more easily test it, more easily understand how to use it, and also more easily reuse it. My personal experience is that I can more easily reason locally about code in this style -- I don't need to mentally keep track of a huge list of concepts. (I recall another quip, about asking for a monkey and getting the whole jungle.)
There is a large web app at my workplace that is written in this style, and it is one of the most pleasant codebases I've ever been dropped into.
I'll quote from the readme:
> This Domain/Converter framework is a way of being explicit about where the boundaries in your code are for a section using one ‘vocabulary,’ as well as a way of sequestering the translation activities that sit at the interface of two such demarcated regions.
Inside each Domain I imagine something like an algebra... a set of core data structures and operations on them.
But yeah, I have very frequently thought about visualizing its behavior while working on that visualizer :D
Is your research related to programming languages?
Also I'm going to have to think about "computing a new value" vs. "modifying state" —not sure I quite get it...
Yep: I just finished a Master's degree with a focus on programming language semantics and analysis. I'm interested in all kinds of static analyses and type systems -- preferably things we as humans can deduce from the source without having to run a separate analysis tool.
> Also I'm going to have to think about "computing a new value" vs. "modifying state" —not sure I quite get it...
It's kind of a subtle distinction. A value doesn't need to have any particular locus of existence; semantically, we only care about its information content, not where it exists in memory. As a corollary, for anyone to use that value, we have to explicitly pass it onward.
On the other hand, mutation is all about a locus of existence, since ostensibly someone else will be looking at the slot you're mutating, and they don't need to be told that you changed something in order to use the updated value. (Which is the root of the problem, quite frankly!)
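The distinction fits in a few lines of Python (a toy `settings` dict, purely for illustration): mutation is visible through every alias of the slot, while a newly computed value changes nothing until it is explicitly handed onward.

```python
# Mutation has a locus: anyone holding an alias to the slot sees the
# update, whether or not they were told about it.
settings = {"retries": 3}
alias = settings
alias["retries"] = 5
print(settings["retries"])  # 5

# Computing a new value: the old value is untouched, and whoever needs
# the update must be handed the new value explicitly.
old_settings = {"retries": 3}
new_settings = {**old_settings, "retries": 5}
print(old_settings["retries"], new_settings["retries"])  # 3 5
```

The first half is "the root of the problem" the comment describes: the reader of `settings` was never told anything changed.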
There are plenty of applications I would never use OOP for. I use Elixir for my backend work. I tend to use OOP for interactive simulation/game sorts of applications.
OOP is popular in large enterprise systems because it (purports to) promote encapsulation and abstraction, which allows many teams/people to interact. Organizations may like OOP because it helps reinforce their drive for independence and fiefdom (see Conway's law https://en.m.wikipedia.org/wiki/Conway%27s_law).
Whether it is the best paradigm, the most practical, 'just good enough', or the wrong choice is anecdotal.
Also, in my experience many companies would rather fail conventionally than succeed unconventionally.
It’s easier to write bad code in some languages than others.
> Languages won't save you from poor design choices.
They do. They save you from entire classes of bugs, and make it harder to shoot yourself in the foot.
Do we need to have this conversation every time we talk about language design? Do you think the tool you use to achieve a task doesn’t matter?
I think many people do think that, with a twist; specifically, when presented with a higher level language they'd argue "it's just a tool", while a lower level language would be "the wrong tool for the job".
I.e, for a hypothetical Ruby programmer, Haskell = "just a tool", C = "the wrong tool".
From my view, here are the concrete issues I have with some of the OO flavors out there.
1) It encourages shared mutable state.
OO creates a new layer of shared state: the object fields. At its very essence, the idea is to have methods that access and modify the object's state directly through its fields. Each field is thus mutated by the methods that directly access it, and you need to read the method bodies to know what state they read and write to. You cannot make the fields immutable, because that renders every non-read-only method useless. You must therefore coordinate access to the fields between the methods using explicit locks. Over time, in practice, it also creates massive classes with too many fields and too many methods that each use only a subset of them, degenerating into even more of a globally shared state structure.
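A minimal sketch of that shared-state layer, with hypothetical names: every method below reads and writes the same fields, and nothing in the method signatures tells you so.

```python
class Account:
    def __init__(self, balance: int):
        self.balance = balance   # shared by every method below
        self.history = []        # also shared

    def deposit(self, amount: int) -> None:
        self.balance += amount   # writes balance
        self.history.append(("dep", amount))

    def apply_fee(self) -> None:
        self.balance -= 1        # also writes balance; the signature gives no hint

acct = Account(100)
acct.deposit(10)
acct.apply_fee()
# Only reading the bodies of deposit/apply_fee reveals that both mutate
# balance; the fields are a miniature global state within the object.
```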
2) It makes dependency injection trickier and thus discourages its use.
Once again, the idea of methods having direct access to fields is the cause of this: methods don't have their dependencies injected; instead, they go find them themselves, through direct access. Testing becomes hard, configuration is pushed down the stack inside the methods, and reuse becomes harder.
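As a sketch (hypothetical names), compare a method that goes and finds its dependency itself with one that has the dependency supplied by the caller:

```python
import time

class ReportHardwired:
    def stamp(self) -> str:
        return f"report@{time.time()}"  # reaches out to the real clock itself

class ReportInjected:
    def __init__(self, clock):
        self.clock = clock              # dependency supplied by the caller

    def stamp(self) -> str:
        return f"report@{self.clock()}"

# In a test we can inject a deterministic clock instead of the real one:
fixed = ReportInjected(clock=lambda: 1234567890.0)
```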
3) It makes inversion of control trickier and thus discourages its use.
Because code can only be passed around wrapped inside an object, and objects are heavy constructs requiring a lot of verbosity to create (a new file is required, a class must be defined, an instance must be created, etc.), it is rare in OOP to see code injected from caller to callee. Instead, conditionals creep into the callee, and configuration parameters are passed in.
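A short sketch of the contrast, with illustrative names: without first-class functions the callee accumulates flags and conditionals; with them, the caller injects the code directly.

```python
# Conditionals creep into the callee:
def summarize_with_flags(items, mode="sum"):
    if mode == "sum":
        return sum(items)
    elif mode == "max":
        return max(items)
    raise ValueError(mode)

# Inversion of control: the caller passes the behavior in directly.
def summarize(items, combine):
    return combine(items)
```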
4) It handles stateless code poorly, and thus discourages its use.
Code that requires no state over time, aka stateless operations, must be wrapped inside a class for no good reason. OO provides nothing to such code; OO is designed for stateful code. A class with no fields isn't useful, so why do I need one if all my code is stateless? So people start using classes as namespaces.
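For example (hypothetical names), the field-less class adds nothing over the plain function; it's a class being used purely as a namespace:

```python
class SlugifierKlass:                # "class as namespace": no fields, no state
    def slugify(self, title: str) -> str:
        return title.strip().lower().replace(" ", "-")

def slugify(title: str) -> str:      # the same stateless operation, unwrapped
    return title.strip().lower().replace(" ", "-")
```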
5) Inheritance is too easy to mess up.
Inheritance as a mechanism for code reuse appeared pretty smart at first. The problem is that it turned out to be pretty hard to do right. You soon find yourself with hidden state from parents and confusing override hierarchies, which forced us into single-inheritance chains, which limited code reuse, and so on. Bottom line: it just created a ton of hidden coupling.
6) Objects are not extendable from the outside.
You can't add functionality or state to an object from the outside. You need to modify the source code of the class directly, or define new subclasses through inheritance. This minimizes code reuse, and encourages object code to grow ever bigger as more and more features are added.
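One sketch of the alternative (illustrative names): a free function over an existing type adds behavior "from the outside", without editing the class or subclassing it.

```python
class Point:                       # pretend this class lives in a library
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

def manhattan(p: Point) -> float:  # new behavior: no subclass, no source edit
    return abs(p.x) + abs(p.y)
```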
That's all I can think of for now. Now, some flavours of OOP work differently in that some or all of these problems might not exist or have solutions to them. Which is why I agree with you, it's best to know the actual problems so you can spot them. Just saying something is OOP doesn't imply all of them will exist.
A lot of languages allow stateless subroutines to exist on their own and offer a separate namespace system not conflated with the OO layer. Trait- or mixin-like systems enable open extension. Inheritance can support multiple parents, or composition is used in its place. Some languages allow subroutines to be passed around without being wrapped in an object. Dependency injection frameworks were added to simplify and encourage its use. Value objects allow a bit more support for immutability. Etc.
FP doesn't suffer from these issues, though it has others in their place, and different FP languages have found different solutions to them. In my experience with many languages, some more OO-oriented and some more FP-oriented, I found FP overall had fewer issues and cleaner solutions to its limitations.
OOP / imperative style is almost the norm nowadays, but that doesn't imply it's the "easiest", just the one that got the most momentum (which can be attributed to social factors more so than technical ones), and is thus widely taught and talked about, which helps with education in that style.
"That's no banana; that's my nose! Acha cha cha cha!" -Jimmy Durante
> Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
"You need all the objects that come with it to make it work"
Maybe you should take a look at a real OOP language like Smalltalk before you tell us why OOP is a bad idea. Otherwise, I might tell you why consumers will never adopt cars as I drove a Trabant once.
> The Trabant was loud, slow, poorly designed, badly built, inhospitable to drive, uncomfortable, confusing and inconvenient. (source: https://en.wikipedia.org/wiki/Trabant)
Java and C++ are both great tools for specific jobs, but from an OOP perspective, they are badly designed. So if you want to discuss OOP as a design choice, it would be unreasonable to discuss it based on those poor implementations. Smalltalk, on the other hand, has its own set of problems, but a poor OOP implementation is not part of it.
Did you have many years experience with Java/C++ beforehand?
The world is built on OOP, it's just that too many like to use it as a punching bag.
I argue that you should start simple with functions and IF you see a use case for classes use them.
To be able to make a reasonable choice, you must learn OOP, or at least the OOP machinery of your language.
A few hints:
a) Python's abc module or Java interfaces serve as a template for implementing the same behavior multiple times in different contexts
b) If you never inherit from the class, the class is useless, except if you use the class as an interface (see a)
c) A class is more complex than a function, i.e. more guns to shoot yourself in the foot with.
d) think function pointer / first-class function. Many times you can avoid a class by passing a function as a parameter, e.g. to filter.
e) think function factory, i.e. a function that returns a function.
f) think pure function / side-effect-free functions. This is a BIG life saver. A function without side effects is much simpler to test than a function with side effects. Of course, you cannot avoid all side effects (I/O et al.) or mutations, so build your abstractions so that the side effects / mutations happen in a well-known place.
g) keep functions small
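Hints (d), (e) and (f) can be combined in a few lines; the names here are illustrative:

```python
def make_threshold_filter(limit):   # (e) a function factory: returns a function
    def keep(x):
        return x <= limit           # (f) pure: output depends only on its input
    return keep

# (d) pass a function as a parameter instead of defining a one-method class:
small = list(filter(make_threshold_filter(3), [1, 5, 2, 9, 3]))  # → [1, 2, 3]
```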
Also, Rust, while allowing the assigning of behavior to a type, is explicitly not OOP. Check its Wikipedia page (which doesn't include OOP in the list of paradigms) and the O'Reilly book (which says Rust isn't OOP).
Personally, I think the idea that code isn't OOP just because you don't use inheritance to be utterly ridiculous.
Personally, I believe the two big OOP definitions are "Java OOP" and "Smalltalk OOP", and Rust fits neither. People coming from heavy OOP backgrounds really struggle with Rust for this reason. It's also why this chapter was the hardest one to write.
Java: https://docs.oracle.com/javase/tutorial/java/concepts/ (Sometimes phrased as "Encapsulation, Inheritance, Polymorphism, Abstraction")
Rust can define behavior on any type of data, including scalar values. There is no concept of a "class", and constructors are just plain functions that return instantiated values; they are not automatically called when a value is created. Traits are more like interfaces: they constrain an implementer. A value has no way of inheriting behavior. Visibility is at the package level, and Rust offers no notion of encapsulation inside a package.
Rust is, by the trending definition of OOP, not OOP.
Don't do that, then. "OOP" is a term-of-art in an academic discipline. It means exactly what it was used to mean by the people who coined the term in the papers they coined it in.
The thing that schools teach under the name "OOP" is a https://en.wikipedia.org/wiki/Lie-to-children intended to introduce something vaguely like OOP, not to introduce OOP itself.
And, before you ask: no, there is no academic jargon term for "the thing that C++ and Java are." From an academic perspective, neither language has any particular unifying semantics. They're both just C with more syntax.
OOP is a different set of semantics, based around closures (objects) with mutable free variables (encapsulated state), where an object's behavior is part of its state.
C++ and Java can simulate this—you might call this the Strategy pattern—but if you build a whole system out of these, then you've effectively built an Abstract Machine for an actually Object-Oriented language, one that ends up being rather incompatible with the runtime it's sitting on top of.
The fact that "OOP" is no longer used to point to what Alan Kay originally meant is immaterial. It's a shame, but that train has passed.
OOP is not a word, though. It's a jargon term. You can't redefine those. They mean what they originally meant, because if they don't, then you lose the ability to understand what the people doing real work using the word by its proper definition are doing. Jargon terms don't drift.
To be clear, I'm mostly talking about the same thing that is true of the term "Begging the question." Laymen can use it to mean whatever they want—and I don't begrudge them that, it's a phrase in a language and people will do what they like with it. But that lay-usage will never change what the phrase means in the context of formal deductive reasoning.
Likewise, programmers can use "objects" and "OOP" to mean whatever they want it to mean—but when having a formal academic discussion about programming language theory, an "object" refers to a specific thing (that came about in LISP at MIT before even Kay; Kay just was the first to write down his observations of the properties of "objects") and "OOP" refers to programming that focuses on such "objects" (as implemented in Smalltalk, CLOS, POSIX processes, Erlang processes, bacteria sharing plasmids, or computers on a network; but not by C++ or Java class-instances.)
I don't think we disagree, here; you're arguing that the lay-usage is X, and the lay-usage is X. I'm just pointing out that the lay-usage is irrelevant in the context of a discussion that requires formal academic analysis of the concept.
Jargon terms are a subset of words.
> You can't redefine those.
You can. Anyone who's been around computing knows that "functional programming" (and even "imperative programming", which at one point contrasted with structured programming) has drifted.
> Jargon terms don't drift.
Jargon terms absolutely drift (and get overloaded) for the same reasons other terms do; the difference is the community of use in which the factors that drive drift/overloading operate.
> To be clear, I'm mostly talking about the same thing that is true of the term "Begging the question." Laymen can use it to mean whatever they want—and I don't begrudge them that, it's a phrase in a language and people will do what they like with it. But that lay-usage will never change what the phrase means in the context of formal deductive reasoning.
Mostly an aside, but the popular alternative usage of that phrase is a transitive verb phrase, and the older usage is an intransitive verb phrase (which, while this clearly reverses the etymology, can be viewed as a special case of the transitive form with a particular direct object assumed), so the two neither conflict nor are incompatible. So it's kind of a bad example of anything other than reflexive pedantry; accepting the alternative usage in formal circles wouldn't be drift or overloading, because it is structurally distinct.
That's a fundamental property of type erasure, which is not exclusive to OOP, and far from the extent of what modern OOP is about.
This is not uncommon especially in the Bay Area.
You know how well that ended... ;)
... oh, and watch out for Go! ducks
Clojure: or, simply use this monstrous component pattern when 70% of your code is boilerplate, or hack into namespaces and override functions in them at runtime. And don't forget to do it in the right order!
Recent example: I was researching how to mock calls to functions in packages in Go... Well, the best thing you can do is to have a package-private variable, assign the function to it, and use it throughout the code so you can swap it for a mock/stub in a test.
There is none of that BS when I write Java or C#. I have a mechanism to decouple contracts from implementations: interfaces. I have a mechanism to supply dependencies to modules: constructor parameters. I can replace implementations easily with mocks or stubs without the target even noticing.
Can somebody provide me with an example of this kind of decoupling achieved in other paradigms _without_ hacking the runtime of a language or ugly tricks like in the Go case?
If you want testable code, the first step is to separate computations from effects. Most of your program should be immutable. Ideally you'd have a mostly functional core, used by an imperative shell.
Now to test a function, you just give it inputs, and check the outputs. Simple as that.
Oh, you're worried that your function might use some other function, and you still want to test it in isolation? I say: give it up. Instead, test that other function first, and when you're confident it's bug-free (at least for the relevant use cases), then test your first function.
And in the rare cases where you still need to inject dependencies, remember that most languages support some form of currying.
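For instance, with partial application you can fix the dependency argument first and then use the specialized function everywhere (a sketch with illustrative names):

```python
from functools import partial

def fetch_user(db_lookup, user_id):
    return db_lookup(user_id)   # dependency is just the first argument

# Production wires in the real lookup; a test wires in a stub:
fake_db = {42: "alice"}.get
fetch_user_test = partial(fetch_user, fake_db)  # dependency "injected" by currying
```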
For small programs with few interfaces to worry about, that might be fine, but as your number of users goes up, so do the odds that you'll be ensuring rollback bits get flipped when filesystems fail. None of that is simple to test without some sort of dependency injection or another heavyweight design pattern.
In OOP, type signatures tend to lie. Not so true for FP.
I once imagined a framework for Clojure that would do something like that:
- you define a bunch of functions. Some of these functions depend on others of them.
- for the dependent ones, you define parameters and tag them somehow
- you call these functions without mentioning those tagged parameters
- during startup, the framework takes over and does partial application, generating functions with the same names but without the tagged parameters, so you kind of have your dependencies injected.
I don't know if this sounds too crazy or too incorrect for Clojure's paradigm.
Adding a framework is a premature abstraction that is likely a sign that you've got your types wrong.
It is also a natural fit for “OOP”, or rather: classes. In a sense, it turns non-OO code into OO simply by storing the interfaces implementing the dependencies required by your code.
If that’s your only state, are you still OO? I’d say no. (Not as it’s commonly understood: mutable state.)
Hence, my proposed solution is: use classes & DI, but avoid (mutable) state (i.e. non dependencies).
Curious to hear others’ thoughts on this, though.
I will explicitly exclude legacy code from my previous statement, where globals, static calls with side effects, stateful objects and inheritance abuse were common.
Now if only I had all the advanced type system features :/.
Sometimes it feels like PHP devs have an inferiority complex and want to use as many design patterns as possible to feel like real engineers. Two years ago, suddenly everyone wanted to add DDD or hexagonal design on top of the framework. Bye bye KISS, hello boilerplate everywhere.
By relying on make for the heavy lifting, the tests are completely incremental: only tests that depend on modified files get run. It also doesn't pollute the runtime code with layers of indirection.
Hot take: RAII has basically taken over everything else as far as structural design foundation goes. Type erasure and encapsulation still play a role, but it's not nearly as fundamental anymore.
All you can really do is try to be knowledgeable about the methodologies, use the right tool for the right job, and try to keep things as simple as possible. And don’t subscribe to anybody’s dogma.
Now all we need to do is get the field to agree on universally applicable definitions of "right tool" and "simple", and never change any requirement after any technical decisions have been made, and we'll be all set!
I had a manager who used the term "ice cream" for phrases like this that sound good (everybody loves it!) but don't help drive any useful conversations or decisions. Should we use the right tool for the job, or the wrong one? Let's use the right one! OK, are we all agreed? Great! It's unanimous. Next issue.
Unfortunately, the 5 people sitting around the table each have a completely different conception of what this means, so we're no closer to a decision than when we started. It's simply not a useful guide or metric. I think it's mostly code for "be quiet and do as I say".
Do you want a checklist for this kind of thing? You’re not going to get one. You have to use your own judgement.
Possibly you’ve fallen victim to being on teams where ego dominates, and members refuse to seek the best option unless they came up with it themselves.
If a class does two broad things simultaneously then inheritance can work great. For example, a User class that inherits from a DB mapper class. I don't want to have to tell my class how to write a record to the DB. All that code can be centralized into one thing and then relied upon for its uniformity across all my models.
This isn't true the way most people use inheritance though. They do things like Sword inherits from Weapon and Weapon inherits from Item. But this just asks for trouble because as requirements get more complex there are more edge cases and the complexity bubbles up into overriding the inherited methods, which makes them less reliable from different calling contexts, or flipping the OO script and pushing class-based-if-statements in the ancestor class.
Then you step back and say "why did we make Sword a weapon in the first place?" and the answer was we had logic somewhere else in the code that did things like check if a user was armed. Well we don't need inheritance for that at all. We can use plain old methods and properties / duck typing.
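A sketch of that duck-typed check, with hypothetical game objects: no Item→Weapon→Sword hierarchy is needed just to ask whether a user is armed.

```python
class Sword:
    damage = 7       # having a positive damage attribute is what makes it a weapon

class Flower:
    pass             # no damage attribute at all

def is_armed(held) -> bool:
    # Duck typing: ask about the property we care about, not the ancestry.
    return getattr(held, "damage", 0) > 0
```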
It doesn't really matter because both of these things can be better solved using composition + traits/interfaces anyways.
And in defense of inheritance, the Liskov Substitution Principle is extremely useful and makes a lot of sense.
If a function accepts a `Weapon` as parameter, surely you should be able to pass it a `Sword`.
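A minimal sketch of that substitution property, with hypothetical classes: code written against the base type keeps working when handed a subtype.

```python
class Weapon:
    damage = 1

class Sword(Weapon):
    damage = 7

def attack(w: Weapon) -> int:
    # LSP: any Weapon, including a Sword, must behave sensibly here.
    return w.damage
```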
"Sword" instances may have specific properties that make them incompatible with a "weapon" superclass. Is a broken sword still a weapon? How about a sword with no handle? Or a sword that has been magically transformed into a flower? Is that still a sword and a weapon, or is it now "really" a flower, and should inherit all flower methods while throwing away all weapon methods?
You can duck type and/or RTTI your way out of this problem, but you can't avoid the fact that traditional OOP is very bad at handling these "it depends" cases, because the only relationship it supports is a strict and static inheritance hierarchy.
Unfortunately many domains, including natural language, can only be described by mutable context-dependent relationships.
In the real world, swords don't turn into flowers, so you might think you're safe. (Except that you might want to include object mutability in a game...)
But in NL the meaning of a phrase can change according to social setting, unstated subtext, speaker gender, age, and even time of day. It's all context, and it can't be ignored without losing essential detail.
All current typing systems seem to be attempts to enforce limited-scope static relationships between opaque atomic objects with more or less static properties.
Mutable context-dependent relationships are everyone's worst nightmare in CS. Academic CS seems to have spent most of its career trying to pretend they don't exist, or if they do, to make them go away.
This is sold on the basis of making more reliable code. In fact it simply doesn't work elegantly for entire classes of problems, including many problems for which it seems to work just fine if you describe them in words, until you have to think about all the possible details.
The worst case output is intolerant brittle code with limited features, and the best is an encrustation of exceptions, edge cases, and work-arounds.
Note I'm not saying there's a simple answer, because there isn't. This is a research-grade problem, and it's barely been considered.
I am saying - beware of simple principles like LSP that claim to solve this problem. Because there are many situations in which they simply don't.
In contrast, non class based languages such as Haskell struggle to model problems that are trivial in OOP, such as how to reuse 90% of existing functionality but override 10% with more specialized behavior. Good luck solving that problem elegantly with type classes.
That always seems to be the problem with deep hierarchies after a while. Suddenly you have something where one of the inherited methods shouldn't be there. You can't take it away, so you have to do something clunky like throwing an exception or making it empty.
The funny thing is that languages like C# and Java abandoned mixins but kept deep inheritance hierarchies. When I did C++ more we used multiple inheritance a lot but with only one or two layers deep. This worked extremely well and now that that I am doing more in C# I miss it a lot.
You don’t need inheritance to achieve this.
Your User class contains data and probably some business logic. The fact that it’s going to be persisted in a database at some point is a detail that the class shouldn’t know.
Consider using the data mapper pattern (or an ORM implementing this pattern). In this pattern, your User object doesn't even know it will be persisted. It doesn't have any parent class. You just manipulate your entities like a graph of plain objects, and when you have finished you ask the data mapper to persist them. In my experience this is much better than what you are describing, which looks like the Active Record pattern.
> If a class does two broad things
Doing this is a violation of one very commonly accepted principle: The single responsibility principle.
Myself, I wouldn't even put any business logic in the User object, except maybe representational logic (e.g. have a couple of helper methods that return multiple representation of the same piece of data).
One of the best things to do is separate behavior from data in the first place, which feels kind anti-OO philosophy, but is definitely easier to reason about and work with.
Inheritance is very useful especially in game programming (in context of OP), and hardly ever anywhere else. In games it's useful to be able to pass around references to an object or have systems that manage pools of objects at exactly the hierarchy of inheritance that makes the most sense for the manager of that system. Not many other applications need to pass around objects between various hierarchical management systems as much.
I think inheritance shouldn't be taught in intro-level programming classes. It's presented as "how to do all things in OOP, shove all the things into multi-level child classes" when really it should be "here's an advanced concept for situational architecture optimization". It defeats the purpose when you do multiple levels of inheritance and no longer ever use the parent classes for anything.
This is because OOP is good at modelling objects, and games model objects.
But let's be serious for a minute here: even regular inheritance has hardly ever led to the kind of nuclear disaster that most pundits claim.
It makes your code base a bit more unwieldy and a bit harder to evolve, but it's really not the end of the world.
This sentence does not make sense.
If you want to have a class composed of two other classes, whose behaviour is defined at run-time - say, a generic "Engine" object which is composed of a "GraphicsRenderer" and an "AudioRenderer" where the first one can be a D3D renderer or an OpenGL renderer and the second can be an XAudio or OpenAL renderer, you have to have inheritance somewhere, because you have to store at least one function pointer at some point. std::function in C++ is implemented with inheritance. Rust traits are based on inheritance. However you look at it, you can never completely "get rid" of inheritance if you want to hide both code and data behind a single pointer. Hell, even the linux kernel written entirely in C reimplements vtables by hand for device drivers because it makes so much sense.
The consensus is to use inheritance of interfaces and use composition to implement the interfaces.
Hence my phrasing: "Inheritance is better achieved through composition".
A consensus by the "maximization of boilerplate" rule.
Inheritance is the most powerful tool available in OOP land. It's the one thing that FP languages still haven't replaced with enough added advantages to make the OOP version look like a toy. So if you are programming in OOP while avoiding inheritance, you would certainly be better off in another paradigm.
(And, of course, powerful tools are easy to misuse. That's no reason to forbid them.)
Assuming this is the case: why do you disagree and what do you think is the cause of this difference in opinion?
Also, C++ has abstract classes with only pure virtual functions, which accomplish the same thing.
- you have to at least implement the destructor if you are going to use the abstract class across multiple DLLs - else, dynamic_cast won't work since each DLL may have its own type_info object for the abstract class
- even if a method is marked abstract, it can still have a default implementation: https://gcc.godbolt.org/z/bAE7v6 though other prominent OO languages are slowly catching up with this ;)
Look how I abused inheritance in this assembler (for a Lisp virtual machine):
The object system is used to define opcodes. Methods on these objects (which get instantiated as singletons) then handle assembling and disassembling.
There is a macro defopcode-derived which defines an opcode similar to another one, using inheritance.
There is no reason for the relationship to go one way or the other; it's just "this thing is like that thing, except for this slight difference". The inheritance could basically go in either direction.
Note that defopcode-derived doesn't even have provision in its syntax to specify behavior; the code is 100% re-used. The only thing different about the derived opcode is the instruction mnemonic and the opcode bits. Inheritance is used just to override a number and symbol which are static slots.
To wit, implementation inheritance gives you a sum type (e.g. a Shape is either a Circle, or a Square, or…) whereas interface inheritance gives you a product type (class Foo : IBar, IBaz means that Foo is an IBar and an IBaz).
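As a hedged sketch of that analogy (illustrative types): a closed set of subclasses acts like a sum type, where a Shape is either a Circle or a Square, while implementing several interfaces acts like a product: the class is all of them at once.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Union[Circle, Square]   # sum: exactly one of these alternatives

def area(s: Shape) -> float:    # consumers branch on which alternative they got
    if isinstance(s, Circle):
        return 3.14159 * s.radius ** 2
    return s.side ** 2

class Persistable:              # interface-like mixins
    def save(self) -> str:
        return "saved"

class Printable:
    def show(self) -> str:
        return "shown"

class Report(Persistable, Printable):  # product: a Report is both at once
    pass
```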
Classes are very well suited to statically formalize the communication points (aka "interfaces", "protocols") between the various parts of your program.
The fact that a concrete class might have a state is actually irrelevant (indeed, we generally make all of our data members "private").
All of these principles have very little basis or formal definition. Bertrand Meyer did make some headway with Eiffel. But the type systems are so weak and the lack of formal semantics makes all of these discussions a bit of hand waving and bike shedding.
At least the DOD folks have some guiding philosophy and are trying to optimize the design of programs to account for memory latency of modern hardware architectures.
The OOP defenders are basing their argument on hot air and hand waving.
There are more interesting languages these days with better designs. Ones that are based on better theories in my opinion.
OOP will be around for a long time if only because it has so many adherents and people will be stubborn to change if history has anything to say about it.
This argument drives me nuts. A simple OOP environment can be quite easy to grasp and (mis)use. Following all the golden rules and principles that SOLID et al. call for is comparably hard to internalize. A rookie can only fail, and has to collect, choose, and study all the wisdom over the years until he becomes a master. Until that point he will create OO code that breaks untold rules.
Ruling the majority of OO code out there as "bad" is hubris and ignores reality. It is easy to do OOP wrong and hard to get it right. This imbalance is proof enough for me that we don't know what we are doing, just justifying. (Yes, this may apply to other paradigms as well.)
It wasn't the inherent nature of OOP that caused that terrible style to become dominant. It's a style that was actively taught and promoted since at least the mid-1990s. It was taught, and taken for granted, that objects were containers for mutable values. Deep inheritance hierarchies were taught as the norm, not the exception. Java was built around this model and then became one of the most popular programming languages in industry, and learning materials for Java reinforced the style. Everyone interviewing for a Java programming position from the late 1990s through the mid 2000s had to learn special jargon related to this style and regurgitate it in interviews. We're suffering through a hangover from decades of this horrible version of OOP being promoted as the "right" way to write software in industry and academia.
Of SOLID, S, O, I and D are rehashing structured programming design principles. LSP is peculiar to it, and not an unreasonable way of thinking of objects, but I don't see people struggling to figure out how to come up with workable class hierarchies.
The point of objects was that they were intuitive and easy, and where it tends to fall apart is in the details. It's often small tasks like writing an equality operator correctly that are absurdly complicated. And while we can construct reasonable class hierarchies, the interaction becomes a bear and the bugs are subtle and confusing.
What I see in OOP programming is that people avoid various idioms or patch around them because they don't trust their tools.
I think the problem with most OOP languages is that some high-level concepts like inheritance were constrained by very low-level implementations, and they often tried to glom several ideas together.
There's no "object algebra" even 40 years in. In C-like languages, objects are using a "struct and vtable in the heap" model. In dynamic languages, they're using the "type instance and a hashtable in the heap" model. Then they typically declare that a value is really a variable, unless it's an atom, and often other weird asymmetries like "the bottom type actually does have a value which is 'null'" and, of course, whatever weirdness they pick up such as floating point.
Those constructs are then overused; this problem is especially apparent in Java where "everything is an object" means that your class becomes your tuple type, and if you want to combine two tuple types you're going to do that via inheritance. In most of them, you don't have a proper discriminated union, so all the stuff you'd do with sum and product types you now have to shoehorn into classes whether it makes sense or not.
It all sort of works, but the reason OOP languages keep adopting non-OOP features is that it doesn't work very well.
Casey Muratori (Handmade Hero) on why OOP is bad and how to get rid of that mindset - https://youtu.be/GKYCA3UsmrU?t=4m50s
Mike Acton (Engine Director @ Insomniac Games) - https://www.youtube.com/watch?v=rX0ItVEVjHc
Plane into the side of the mountain, no survivors, call off the search. Which is when I really started to learn (around '96/97). And now I love it.
Until I come across people who always start by defining an interface first and then think about what might follow. And dependency injection. Holy priceless collection of Etruscan snoods, DI makes me want to gouge out my eyeballs with a rusty corkscrew.
Like I said though I love it and today my happy land is about 80% OOP and 20% everything else.
When I first fell in love with OO there was a time that I felt that there wasn't a problem in the world I couldn't solve using it. That moment it finally "clicked", and all the lights came on? Wow! One of the best intellectual experiences of my life. It changed everything.
I was right about being able to do anything with it. What I was wrong about was how some things are more difficult than others. Rules-based systems, for instance, get abstracted away into a sort of gobbledygook in OO. Scripting is uglier than it should be.
Then I had a similar realization about FP. It took much, much longer and there was no dramatic moment, but the impact was just as huge. It was a new way of thinking about and solving problems. It has its own edge cases and antipatterns, of course, just like OO.
I wonder what the next A-Ha! change will be? TLA+? Is there a system of thinking around morals and values as it applies to problem solving, as Kant and others thought? Beats me. I hope I get to find out.
The only time this approach is really a problem is when doing slow calls such as I/O - eg. writing to the database. In such cases you can easily switch to an in-memory database or use a factory pattern (which is kind of like a hand-rolled mock).
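As a hedged sketch of that "factory as hand-rolled mock" idea (the store names here are made up): one switch point decides which implementation the rest of the code receives, so tests can swap in an in-memory stand-in.

```python
class SqlUserStore:
    """Talks to a real database; slow and unwanted in unit tests."""
    def get(self, user_id):
        raise RuntimeError("no database available in this sketch")

class MemoryUserStore:
    """Drop-in in-memory stand-in with the same interface."""
    def __init__(self):
        self._rows = {}
    def put(self, user_id, row):
        self._rows[user_id] = row
    def get(self, user_id):
        return self._rows.get(user_id)

def make_user_store(testing=False):
    # The 'factory': the single place that decides which
    # implementation everything else is handed.
    return MemoryUserStore() if testing else SqlUserStore()

store = make_user_store(testing=True)
store.put(1, {"name": "ada"})
print(store.get(1)["name"])  # ada
```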
It's true that an error in, say, Component C may generate many test failures for other components, but in practice that's not really a problem. Just pick one of the test failures and drill down until you find the root cause; fixing that cause will magically fix the other 55 test failures.
Unity's traditional entity system is suffering from a number of "OOP-isms" which make it hard/impossible to optimize for performance.
The new ECS strictly follows a Data-Oriented-Design approach, where everything is built around laying out the data in memory in a CPU-cache friendly way (and a few other things that neatly 'fall into place', like spreading work across CPU cores, a specialized 'high-performance' C# dialect, and the ability to move work from the CPU to the GPU).
The big question is how the traditional Unity audience will react, since the ECS programming model is quite a bit different from the old way, and it's no longer as simple to build a game from a Jenga tower of ad-hoc hacks ;)
However, it's important to note that Unity is adding an ECS, not inventing the concept.
It's been well received so far (for example: https://forum.unity.com/threads/ecs-or-why-should-i-bother-m... ).
There's also a good incentive to adopt ECS for certain games; Unity is offering a hugely reduced runtime size if you follow the ECS pattern, which is much needed for web games and other “interactive experiences” (ads, mobile game demos). The “ECS for Small Things” presentation at GDC is worth a look for those interested:
... which had some discussion earlier, and the top comment on that submission links to the article of this submission.
What they are referring to as ECS is more than just composition over inheritance. There are entity component architectures that are just that--Unity's existing class based component architecture for example.
But the ECS in this case refers to Data Oriented Design--basically laying out data like normalized database tables. Every component system has a table, and each row in the table is the data for one component (you don't actually use a relational DB for this, the data is just organized in a similar manner). The entity itself is just an id that all of the components reference.
The concept as used in games started in the late 90s as Structs of Arrays instead of Arrays of Structs.
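A minimal sketch of that Structs-of-Arrays idea in Python (the component fields are invented for illustration); in a compiled language the SoA columns would be contiguous, cache-friendly arrays:

```python
# Array of Structs: each entity is one record carrying all its fields.
aos = [
    {"id": 0, "x": 0.0, "y": 0.0, "vx": 1.0, "vy": 0.0},
    {"id": 1, "x": 5.0, "y": 2.0, "vx": 0.0, "vy": 1.0},
]

# Struct of Arrays: one 'table' per field; the entity is just an index.
soa = {
    "x":  [0.0, 5.0],
    "y":  [0.0, 2.0],
    "vx": [1.0, 0.0],
    "vy": [0.0, 1.0],
}

def step_soa(tables, dt):
    # A 'movement system' touches only the columns it needs,
    # never dragging unrelated fields through the cache.
    for i in range(len(tables["x"])):
        tables["x"][i] += tables["vx"][i] * dt
        tables["y"][i] += tables["vy"][i] * dt

step_soa(soa, 1.0)
print(soa["x"])  # [1.0, 5.0]
```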
Here's a good free book on the subject (there's a paid hard copy as well): http://www.dataorienteddesign.com/dodmain/
The problem is how badly many schools teach OOP paradigms, and how many frameworks abuse a specific style of OOP.
The "pure OOP" style that is so roundly criticized is just one of C++'s possible styles. The whole article is effectively about how there are other ways to use C++. I actually tend to think that much of the style he is describing (composition based) is really just what the pure OOP people used to criticize as "C with objects".
And to further that point, there is no more Safari on Windows, for this reason among many others. Remember MemMaker? It's crazy that there were so few applications back when we managed some of the memory ourselves, but it worked very well, didn't it? OOP really took off back then too, and then memory utilities were no longer needed and didn't last long. The convenience of not worrying about memory is one of the many things OOP was able to solve as it advanced, and it is still pulling off the same tricks today in a much more complex and metered way. OOP does far more than this, of course, but the solutions developed with it for managing memory are intense, and as much art as science. So we should question it, and many paradigms, to make this better. A mentor of mine, when explaining this, would compare it to juggling... and then proceed to actually start juggling while talking about his code. He'd stop, look up, just pause, and say: that's all we are doing here, just juggling.
Do both. Teach how to use composition. Teach how to use the relational model. Don't avoid teaching either by using misleading sales tactics.
DOD explicitly advocates separating data from behavior, and is strongly opposed to OOP in general.
You could build each system as an object, but you wouldn't want to store the relevant data structures within that object because other systems will likely need to use those data structures as well.
The entire architecture is predicated on separating data and behavior. Yes you can build an ECS system using classes, but nothing about it fits into what you'd call OOP.
Like the Codd paper for Relational Algebra.
Instead, it starts from the other side: OOP is based on a few core concepts, and those concepts are critical to understanding OOP. The first concept is 'Everything is an object', the second might be 'Objects can receive messages'. So if you have some code:
1 + 2
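In those terms, `1 + 2` is not an operator applied to two numbers; it is the object `1` receiving the message `+` with the argument `2`. Python happens to make the same message-passing reading visible, which may help if you're coming from C:

```python
# "1 + 2" re-read as a message send: the receiver is the object 1,
# the message is __add__, the argument is 2.
result = (1).__add__(2)
print(result)  # 3

# The bound method even remembers its receiver:
print((1).__add__.__self__)  # 1
```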
You might wonder about the preachy tone, but in the beginning, it took me a few months before I understood that I was trying to understand OOP from the wrong side (at the time I was thinking in C) and I would like others to have a quicker start than I had.
I am reasonably fluent in Java and Python, and read a lot of C/C++. So I think I have a solid understanding what OO looks like, and what concepts are involved (dynamic dispatch, polymorphism, inheritance, etc). However, there is a lot of critique about the OO models in these languages and code that is found in the wild. What I am looking for is a principled treatment, that avoids these pitfalls.
Another good source of ebooks seems to be this page: http://stephane.ducasse.free.fr/FreeBooks/
There you can find the Blue-Book which looks like a great resource too:
Some Links seem to be broken nowadays, but some content can be obtained from the archive: https://web.archive.org/web/20130319110836/http://stephane.d...
But, just as SQL doesn't really implement Codd's relational algebra, so it is the case that most so-called OOP languages miss the mark vis-à-vis Alan Kay's original conception.
The analogy is buttressed by the fact that many early RDBMSs didn't even support joins (path independence being an essential characteristic of RA), just as many mainstream OOP languages didn't/don't idiomatically endow objects with a strong way to protect themselves (encapsulation being a necessary characteristic, thus pervasive use of setters being the main sin). But Kay did praise Erlang for getting OOP right.
He does have an account here on HN. https://news.ycombinator.com/threads?id=alankay1
I hope he doesn't get tired of explaining the same things over-and-over again. I have asked questions and he has answered, but I usually end up misinterpreting the ideas. :(
Funny story: That is at least the second account that he created on HN. He registered an earlier one just to reply to a comment that I had made.
This comment does a nice job explaining why:
I also like how in the interim Elixir addresses some of the issues surrounding gen_server verbosity.
Don't worry about the "in Ruby" part, it's just as good even if you know nothing of Ruby. I enjoyed it and have never written a line of Ruby in my life :)
In Python, especially. I'll find myself starting off all experiments or simple projects with functions and basic data types. As something evolves and I want some semantic clarity I'll stop using dicts and start using namedtuples. And then at some point I may replace the namedtuples with classes. From there I may discover value in having subclasses so I'll add a few (but this is exceedingly rare in my line of work).
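That progression, sketched (the `User` example is hypothetical):

```python
from collections import namedtuple

# Stage 1: a plain dict -- quick and anonymous, no guarantees.
user = {"name": "ada", "email": "ada@example.com"}

# Stage 2: a namedtuple -- same data, but the shape now has a name,
# attribute access, and typos fail loudly as AttributeErrors.
User = namedtuple("User", ["name", "email"])
user = User("ada", "ada@example.com")

# Stage 3: a class -- only once behavior genuinely belongs with the data.
class UserAccount:
    def __init__(self, name, email):
        self.name = name
        self.email = email

    def domain(self):
        return self.email.split("@")[1]

print(UserAccount("ada", "ada@example.com").domain())  # example.com
```

The nice part is that each stage is cheap to reach from the previous one, so you never pay the class tax before the code has earned it.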
The program is not good, the design is not well thought out, and the implementation is not very clean. It is a design prototype that you use to write the real application. If only there were time for a fresh start. The solution we have by that point is not perfect, but it does three things: it actually solves the problem, solves it well enough for regular use, and allows further modification with only minor pain.
I wish I could tell the full story but most software stories end before the grand finale.
There's a straight line from requirements to implementation; no meandering involved. At least that's the theory. In practice...
That may have been where Java wanted to go, but, when I'm working in Java, I don't feel like that's where I am. Ways of doing things in Java tend to be wildly inconsistent from project to project. Partially, I think, because so much core functionality in the Java ecosystem was allowed to be federated out to 3rd-party projects for so long. Take the long-standing popularity (and rivalry) of Guava and Apache Commons for handling even basic tasks that are hard to get done using the core Java APIs. If there's such a thing as a "platform smell", I'd say that certainly qualifies.
With Python, on the other hand, there is a fairly consistent common understanding of what "Pythonic" means, and, even when there really is more than one way to do it, the question of which one to use can usually be quickly resolved to a predictable outcome by simply pointing out that one option is the more Pythonic way to do things.
(edit: Though, to be fair, Java was first released into a world where languages like C, C++ and Common Lisp represented the status quo. Expectations were lower at the time.)
Even if I don't use Guava or Apache Commons myself, for example, I still occasionally run into dependency conflicts that I need to resolve with awful hacks like package relocation because so many other major libraries rely on one or the other, and neither library is a particularly great citizen about breaking changes.
It’s interesting - when you begin adding constraints, sometimes it helps solve the problem. You can’t be creative unless you’ve created a solution. It would be interesting to see the impact on this with programming languages.
It's then you realize you should have started with clay.
Do you not miss more advanced features, like multiple dispatch? Do you just implement it ad-hoc when you need it?
I don't know any way to say it that doesn't come off sounding condescending, but this looks like the Blub Paradox to me. Python's brand of OOP is better than most, but it still feels pretty limiting to me. It doesn't even offer syntactic abstraction to make it easy to work around. You have to hope (as the 'multimethod' package does) that other features accidentally allow you to.
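For concreteness, a hand-rolled multimethod in Python is just a registry dict keyed by argument types; packages like 'multimethod' essentially automate this trick via annotations. A minimal sketch (the collision example is invented):

```python
# Dispatch on the types of *both* arguments via a registry dict.
_collide_registry = {}

def collide_impl(left_type, right_type):
    def register(fn):
        _collide_registry[(left_type, right_type)] = fn
        return fn
    return register

def collide(a, b):
    # Look up the implementation for this exact pair of types.
    fn = _collide_registry[(type(a), type(b))]
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@collide_impl(Asteroid, Ship)
def _(a, s):
    return "ship takes damage"

@collide_impl(Asteroid, Asteroid)
def _(a, b):
    return "rocks bounce"

print(collide(Asteroid(), Ship()))  # ship takes damage
```

It works, but note how much of it is plumbing that a language with real multiple dispatch (CLOS, Julia) gives you for free, which is the Blub point.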
No... I think this is missing precisely what was exactly the point of the article (see "(B)" in the text). This post is not a fight over OOP religion. The point of it is if you misunderstand or mischaracterize "nuances" about some idea (if these are actually all nuances, which I think is debatable) and propagate them, then you shield other people from prior (edit: or even current) literature in the area, and hence prevent them from understanding what the relevant techniques really are and how to use them properly at all. This makes them lose a potentially powerful tool in their toolset, which you should agree is an awful thing regardless of what coding 'religion' you follow.
Very few programmers know the prior art wrt. OOP, or have worked with the kind of code in which OOP is done well (I guess "OOD", using the terminology from the article). Instead, almost all junior (even senior) programmers I encounter parrot something along the lines of OOP being too enterprisey and crufty, and something about inheritance being stupid. OOP is dismissed out of hand. It's high time for a correction in that mindset. The ability to structure your data and the operations on that data together in one place is incredibly powerful, and OOP is a good approach to doing that.
Schools are partly to blame. They teach OOP as if it is an exercise in abstracting some sort of reality (e.g. "a dog barks, a cat meows, and both walk"). But that approach falls apart for the sort of concepts programmers work with. OOP is at its core a way to structure code, and to do so cleanly, to avoid repetition, and to enable easy navigation through a program. It is not intended to be mental map of some external reality.
Agreed. In a lot of cases if you don't have objects (the good parts) you are doomed to reinvent them:
"Linux uses service abstractions in order to support multiple file systems. There are vtable-like structures such as file_operations that are used to dispatch operations such as read to the code that implements file reading in a particular driver."
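A hedged Python sketch of that pattern (the filesystem names and functions are invented for illustration): a per-filesystem table of function pointers, dispatched through by generic code, is an object in everything but name.

```python
# A 'file_operations'-style dispatch table: each filesystem registers
# its own implementations, and the generic layer dispatches through it.
def ext4_read(path):
    return f"ext4 bytes of {path}"

def nfs_read(path):
    return f"nfs bytes of {path}"

FILE_OPS = {
    "ext4": {"read": ext4_read},
    "nfs":  {"read": nfs_read},
}

def vfs_read(fs_type, path):
    # Generic code: no idea which implementation it is calling.
    return FILE_OPS[fs_type]["read"](path)

print(vfs_read("ext4", "/etc/hosts"))  # ext4 bytes of /etc/hosts
```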
Generics weren't part of the earliest OOP because generics only make sense with static typing and the earliest OOP languages were dynamically typed; generics were around in other places around the time of early OOP, though.
No, this is not an example... generics did not "supersede" OOP.
When they were finally implemented in OOP, it superseded the original intentions of OOP.
The question is, are we now supposed to remove generics because they don't conform to the early literature of OOP?
Looks like the OOP edited his comment though, so my point is irrelevant.
> When they were finally implemented in OOP, it superseded the original intentions of OOP.
No, when they were implemented in static OOP, it brought static OOP closer to intentions of the original, dynamic, OOP, where generic-ness doesn't require parametric polymorphism.
So what? Early cars didn't have AC, therefore AC superseded cars? Or therefore I'm somehow arguing we should use AC instead of cars?
> The question is, are we now supposed to remove generics because they don't conform to the early literature of OOP?
Who ever claimed such a thing in the first place? It certainly wasn't me. If there is any question like this under discussion, it is whether OOP should be removed because generics somehow superseded it (your idea), not the other way around. In either case the answer is clearly no, because the idea is obviously ridiculous and not something anybody suggested.
> Looks like the OOP edited his comment though, so my point is irrelevant.
Not sure what this is referring to, but I haven't been able to agree with your comment since it was initially written.
I pointed out that just because the literature is original doesn't mean it can't be superseded. I gave generics as an example that was later implemented in OOP languages and fail to see how implementing these perverted the original intentions.
I was then told that OOP intentions were originally meant to be dynamically typed and that the static typing of generics was meant to put it towards the original intentions of this dynamic structure (this is untrue because Alan Kay just didn't like static typing, but didn't make dynamic typing a requirement for OOP). Upon further research, the earliest OOP concepts were explained in the 1960's and the first OO language (Simula) was statically typed and a superset of ALGOL 60 which was a language made in 1960, with Simula following in 1965. Smalltalk came in at 1972 (around the time of generics) and is considered the definitive OOP language which is dynamically typed.
So it's hard to say, without direct sources, what the original intentions of OOP were, but considering that OOP appeared before the first languages that contained generics (i.e. in the 1970s), generics is an idea that superseded OOP.
Considering the utility of generics, it's clear that later concepts added to OOP did not somehow pervert the original literature.
So we've established the following:
1) The earliest OOP language (Simula) was statically typed.
2) Generics came in after Simula
3) The original intention of OOP could probably be attributed to Alan Kay, who created Smalltalk, but it borrowed heavily from Simula. And while Alan Kay coined the term OOP, the idea was not created in a vacuum as OOP concepts predate Smalltalk.
Hopefully this provides some clarification. But my guess is people will continue to misinterpret what I meant.
OOP the Eiffel, Sather, Smalltalk, C+@, BETA, CLOS, SELF way isn't the same thing as most people learn in school as THE OOP.
Just like there isn't a single way of doing FP or LP.
Also lets not forget that all successful FP/LP languages are actually multi-paradigm and also include OOP concepts.
Isn't he better off taking that crystal ball that gives him the foresight, using it to pick the correct lottery numbers and simply retiring?
So no, he's not better off using that "crystal ball" to predict the lottery.
You need to foresee the scale of what you're building and how you would proceed to the next level. Some things must be solved right before the first deployment because you can never change them after the application is deployed. You must know what these things are and solve them right. You must also reduce their number ideally to zero if possible. You must use solutions that allow refactoring and later scaling in areas where your crystal ball is not sure. If the hard things are solved correctly you can use average workers to do the rest and it will work well.
Perhaps this is a function of me working in startups and consulting my whole career, but it seems extremely misguided, if not negligent for an experienced engineer trying to design for use cases five years in the future. Five months into the future is even pushing it.
What kind of companies operate in this way?
Even if a company fails, its products, processes (and code) usually get absorbed by some other company and need to be maintained - startups get acquihires that keep teams but discard products; "normal companies" get M&As that discard headcount but keep product lines, divisions and processes that require lots and lots of running code. The large companies often have multiple "inherited" codebases from all the other companies they have absorbed.

And there is a lot of old code running; nothing is as permanent as temporary code. I have seen comments stating "this won't work properly on the boundary between fiscal years, but the system is scheduled to be replaced by then" that were made, IIRC, 6 years before I was looking at that system - so it obviously did not get replaced back then.

In many industries a 10-year-old company is a young company; heck, most of the current "internet startup unicorns" are 10, 20 or more years old. In established industries (you do know that the vast majority of software people work in non-software companies, right? most code is written for internal business needs, not sold as a service or product or consulting to others) there is a lot of mature code serving business processes that have been there for decades and will be there for decades, but often see changes that also require code adaptations. The same goes for all the code inside industrial products - in the automotive industry, in home electronics, etc.; you may have a new model of car every year, but most of the code in that car will be much older than that.
I mean, the trivial fact is that if we look across the whole industry, all the statistics show that the majority of programmer manpower is spent on maintenance. So the total costs of software are dominated by how easy it is to maintain it, and a lot of that comes from proper design that takes into account what the likely needs are going to be after a bunch of years.
In my experience, badly designed code tends to become a net loss after a couple of months because after that time someone is going to have to modify or fix it.
Bits are objects, too...
Structs are stack allocated and have no inheritance, but otherwise work syntactically like classes.
Classes are heap allocated and allow single inheritance, unless the parent is an interface.
Interfaces are similar to a class, but their member functions must be overridden. A class can inherit from multiple interfaces.
I tend to use a mix of these and templates depending on the type of data i'm handling. I find it gives the best of whatever design pattern works well for different parts of a project without locking you into a certain paradigm throughout and still keeping everything fairly logical and coherent to read through and understand.
That is the big promise and the big lie of OOP. It, in fact, accomplishes the opposite.
The medium used across systems today is data, not objects. Your objects are not compatible with systems across the wire; they need to be converted to data (JSON, XML, ...). They're not compatible with your database; they need to be converted to data (SQL, ...). And if you want to use other people's objects (say, from a library), you first have to write a layer that translates them into your own objects, since objects from other systems won't directly fit the model of your own object system; they always need to be engineered in. And if the objects encapsulate data that you actually need but don't offer ways to get at it (private members), you often have to jump through hoops to get it.
Not to mention the fact that OOP often entails mutability, which leads to problems in multithreaded processing.
Clearly the answer is to use a more data-oriented perspective and use a programming language focused around data. Clojure gets it right and that's what I use. It's all concise functional code that skips all that class creation OOP loves, instead operating directly on immutable data (numbers, strings, maps, vectors, sets).
I recommend watching some talks by Rich Hickey (the guy who made Clojure). They're almost all excellent.
Well I wonder all those reusable libraries that I'm using all the time such as boost, Qt, POCO, openframeworks, etc... come from then. Am I dreaming them ?
> The medium used across systems today is data, not objects. Your objects are not compatible with systems across the wire, they need to be converted to data (JSON, XML, ...). They're not compatible with your data base, they need to be converted to data (SQL, ...)
not all code on earth is your average server app that communicates with a DB and sends JSON to the internet. I don't think I have even one installed program working like this on my computers. However I have an office suite, a lot of GUI apps, media authoring software, music player, web browser, mail client.. and they are all built with OOP languages - C++ being the one used for the immense majority - and OOP patterns.
> And if you want to use other people's objects (say from a library) you first have to make a layer that translates them from your own objects
Funny, I've been using .NET framework objects without any type of translation. And the point is to use inheritance to mitigate the translation.
Don't blame the model for its poor use. If there's no way to get the data you need, then that means the class was designed so you didn't actually need it or there are other ways to get it (i.e. through an interface).
>Clojure gets it right and that's what I use.
Except Clojure uses OOP principles, and even its own documentation says it uses immutable objects in the form of interfaces. Interfaces are essentially stripped-down abstract classes. How this is not a subset of OOP, I don't know.
I'm sure Clojure is great, but can you write interactive applications with it without integrating OOP libraries?
Except for the fact that there's usually a mismatch between how your database handles data and how you want to get them back into objects. Cue: Object-relational impedance mismatch.
And while your objects become bigger and bigger and your domain more complicated, you end up relying on ORMs that keep re-creating those objects from the database with every transaction and load lots of data no one requested.
And then you are wondering why your stuff doesn't scale.
> Interfaces are essentially stripped down abstract classes. How this is not a subset of OOP, I don't know.
It's not. Interfaces have been around way before OO entered the field.
That was actually sort of my point. You need all of this extra stuff _because_ your code is all objects. Data doesn't get serialised, data just gets sent and then it gets received. Why should you spend time serialising and de-serialising an object, when you can just send your map data structure directly? Maps can be represented 1:1 as e.g. JSON. Any JSON data is basically a big map data structure. It's one function call instead of hours of writing ORM classes or custom serialisation methods just to send some data over a wire.
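The "one function call" claim is easy to see in Python (the payload here is made up): a plain map round-trips through JSON with no serializer classes at all.

```python
import json

# A plain map round-trips to JSON in one call each way -- no ORM
# layer, no custom serializer, no class per payload.
payload = {"id": 7, "tags": ["a", "b"], "meta": {"ok": True}}

wire = json.dumps(payload)   # map -> JSON string
back = json.loads(wire)      # JSON string -> map

print(back == payload)  # True
```

The caveat is that this only holds for JSON-shaped data (strings, numbers, bools, lists, maps); anything richer puts you back in custom-encoder territory.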
> Except Clojure uses OOP principles and even admits to saying it uses immutable objects in the form of interfaces. Interfaces are essentially stripped down abstract classes. How this is not a subset of OOP, I don't know.
You don't use interfaces in Clojure, you tend to use multi-methods for most purposes where an interface is needed.
> I'm sure Clojure is great, but can you write interactive applications with it without integrating OOP libraries?
Well, yeah? Why wouldn't you be able to?
Again, you don't write the ORM classes, the framework does it for you.
And what you advocate is essentially sending a table over the network. So, what happens if the data within that map is complex? Are you suggesting sending every piece of a complex data type over the wire in separate chunks? If so, how do you relate them in the application? You still have to make some sense of that JSON data in your application. Having it in a big map structure is akin to a god object.
In my mind, all you're doing is masking objects behind different concepts just because you don't like using classes.
I'm not sure what you mean by complex data - data is data, and using a serial format like edn allows you to encode a lot of different stuff as data - even functions. I think you're stuck in the OO mode where you're passing around objects and classes instead of just data. Data is so much easier to deal with!
OOP-based languages are certainly more common, but definitely not the only game in town. Clojure, Erlang, Lisp, Perl 4, Forth, Fortran, Haskell...