
OOP Is Dead, Long Live OOP - starbugs
https://www.gamedev.net/blogs/entry/2265481-oop-is-dead-long-live-oop/
======
ilovecaching
Becoming a professional Haskell and Erlang developer really shifted my view on
OOP (let OOP denote class-based OOP as found in Java or C++). In my view, OOP
has proven a poor model for computation, and the result has been that OO code
is almost always significantly more complex and error-prone than an equivalent
computation written in a concurrent, functional, or structured paradigm.
Recent trends in language design (see Rust, Go, Elixir) also seem to be
abandoning OOP in favor of other models.

OOP provides competing ways of abstracting behavior that in Haskell we can
model with type parameters and constraints. Objects are not truly encapsulated
in the way Erlang processes are, and they are a poor fit for SMP. Objects also
pathologically hide data in an attempt to manage mutability, making it
impossible to reason about the memory layout of the program.

All in all, OOP is a toolkit for building bad abstractions: abstractions that
do not easily model computation, that hide data, and that tend to create
overly complex solutions to problems that are often full of errors that a
language focused more on type expressivity could catch at compile time.

~~~
westoncb
For some reason 'arguments' against OOP seem to follow a common pattern. You
have said many things against OOP, but you haven't actually presented an
_argument_ for why it's bad. I'll present each of your assertions here
individually to clarify.

> OOP has proven a poor model for computation

> OO code is almost always significantly more complex and error prone than an
> equivalent computation written in a concurrent, functional, or structured
> paradigm.

These claims may or may not be true, but they aren't very useful if you merely
assert them without providing any justification.

> Recent trends in language design (see Rust, Go, Elixir) also seem to be
> abandoning OOP in favor of other models

There are a number of ways to account for these trends besides OOP being an
intrinsically bad model. The most obvious one is that there are fashions in
language design, and right now OOP is not fashionable. We already know that;
it's not a strong argument against it. Ironically, many in the OOP opposition
use the _same_ argument to explain OOP ever getting popular in the first
place: "it was just fashionable."

> OOP provides competing ways of abstracting behavior that in Haskell we can
> model with type parameters and constraints.

> Objects are not truly encapsulated in the way Erlang processes are, and
> they are a poor fit for SMP.

You have pointed out here that more classical OOP languages do things
differently from Haskell and Erlang. This should be expected and is not an
argument against those OOP languages. (Yes, you could say, "Erlang is better
at concurrency" because of the way in which it's different—but my
understanding is it's pretty well accepted that Erlang is sort of a freak of
nature here, so it's not a good argument against OOP generally.)

> Objects also pathologically hide data in an attempt to manage mutability,
> making it impossible to reason about the memory layout of the program.

They do hide data, but the 'pathologically' is something you've added on your
own. There is a design philosophy in which this data hiding plays an
important, positive role. When you say, "making it impossible to reason about
the memory layout of the program." —this sounds to me like missing the point
of that design philosophy: the purpose (and oftentimes tradeoff) of higher-
level languages is that you don't need to personally manage these details. I
think it's largely an application-dependent thing: you may be writing code
that requires that, but not all interesting software hinges on low-level
performance tuning.

> OOP is a toolkit for building bad abstractions:

> ... abstractions that do not easily model computation

> ... that hide data

> ... and that tend to create overly complex solutions to problems that are
> often full of errors that a language focused more on type expressivity could
> catch at compile time.

Another collection of unjustified assertions, except the 'hide data' part
which I accounted for earlier.

So across ~10 negative assertions about OOP you have 3 quasi-justifications:
newer languages aren't using OOP as much, hiding data is bad, and Erlang is
better for SMP.

~~~
loup-vaillant
The pattern you talk about is mainly a product of not wanting to squeeze a
whole essay into an HN comment.

I also suspect the problems with OOP are hard to communicate. I for one always
had a problem with OOP, but I could never quite point it out. Sure, when faced
with an OOP design, I could almost always find simplifications. But maybe I
never saw the good designs? Maybe this was OOP done wrong?

I do have reasons to think OOP is not the way (no sum types, cumbersome support
for behavioural parameters, and above all an unreasonable encouragement of
mutability), but then I have to justify why those points are important, and
why they even _apply_ to OOP (it's kind of a moving target). Overall, all I'm
left with is a sense of uneasiness and distrust towards OOP.

~~~
westoncb
> The pattern you talk about is mainly a product of not wanting to squeeze a
> whole essay into an HN comment.

That may very well apply to the GP's comment—but, my observation of the
pattern is derived from a mix of mini-essay comments, and articles people are
writing on Medium or their blogs or whatever, where the space constraints
aren't so tight.

There are a couple things you'll regularly find: laughably bad straw-men (GP
is free of these), overly vague statements that only survive scrutiny because
of their vagueness (e.g. when the GP says OOP produces "abstractions that do
not easily model computation"), and unjustified claims.

The net effect is something that sounds bad, but if looked at closely carries
very little force.

I suspect the reasons for it are:

1) Actually evaluating a language paradigm is more difficult than these folks
suspect. Their view matches their experience and they assume their experience
is more global than it really is. Additionally, we don't have a mature
theoretical framework for making the comparisons.

2) People are arguing for personal reasons. They have committed themselves to
some paradigm and they want to feel secure in their justification for doing
so.

~~~
Twisol
> Additionally, we don't have a mature theoretical framework for making the
> comparisons.

This is really the problem. As much as I have strong opinions and beliefs
about how to architect code, every argument I come up with boils down to some
flavor of "I like it better this way". Which is true -- I _do_ like it better
this way -- but hardly actionable, and it doesn't get at the essence of _why_
I like it better.

The problem with making everything an object -- or more precisely, having lots
of mutable objects in an object space with a complex dependency graph -- is
that it becomes very hard to model both how the program state changes over
time and what causes the program state to change in the first place. I think
the prevailing OOP fashion is to cut objects and methods apart into
ridiculously small pieces, which takes encapsulation and loose coupling to an
extreme. This gives rise to the popular quip, "In an OOP program, everything
happens somewhere else." I can't think straight in this kind of setting.

I believe that mutable state should be both minimized and _aggregated_. As
much as is humanly possible, immutable values should be used to mediate
interactions between units of code (be those functional or OO units), and
mutation should occur at shallow points in the call stack. Objects can work
well for encapsulating this mutable state, but _within_ the scope of an
object, mutation should be minimized and functional styles preferred.

Using a functional style doesn't mean giving up on loose coupling or
implementation hiding. Rust, Haskell, and plenty of other languages support
these same concepts in the form of parametric polymorphism, e.g. traits or
typeclasses. It _does_ mean giving up on the idea that you can mutate state
whenever it's convenient. Instead, you have to return a representation of the
action you'd like to take, and let the imperative shell perform that action.
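
A minimal Python sketch of that shape (all names here are invented for
illustration): the core computes new values and returns a description of the
action, and only the shell performs the mutation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Deposit:
    """An immutable value describing an action to take."""
    account_id: str
    amount: int

def plan_deposit(account_id: str, balance: int, amount: int):
    """Functional core: pure; computes a new balance and describes the action."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return balance + amount, Deposit(account_id, amount)

def run_deposit(balances: dict, account_id: str, amount: int) -> None:
    """Imperative shell: the only place where state is actually mutated."""
    new_balance, action = plan_deposit(account_id, balances[account_id], amount)
    balances[action.account_id] = new_balance  # shallow, localized mutation
```

The core is trivially testable with plain inputs and outputs; only the thin
shell ever touches mutable state.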

Speaking of imperative shells and functional cores, Gary Bernhardt's talk
called "Boundaries" is an excellent overview of this kind of architecture [1].
There was also a thread here on HN about similar principles [2].

[1] [https://www.destroyallsoftware.com/talks/boundaries](https://www.destroyallsoftware.com/talks/boundaries)

[2] [https://news.ycombinator.com/item?id=18043058](https://news.ycombinator.com/item?id=18043058)

~~~
westoncb
That makes a lot of sense to me. Looking forward to checking out "Boundaries".

Btw, one other idea I've had on the subject is that the problems with mutable
state could be mitigated if we were able to more easily see/comprehend the
state as it's being modified by a program; without that capability, the only recourse
we're left with is our imagination, which of course is woefully inadequate for
the task. You can see more concretely what I'm talking about in my project
here (video):
[http://symbolflux.com/projects/avd](http://symbolflux.com/projects/avd)

From what I've seen, structuring a program to not modify state is almost
always more difficult than the alternative[0]. There are certain problems
where this difficulty is justified (because of, e.g., reliability demands);
but I think most problems in programming are not those, and if we could just
mitigate the error-proneness of state mutation, that may leave us at a good
middle ground.

[0] The exception is when you're in a problem domain that can naturally be
dealt with via pure functions, where you're essentially just mapping data in
one format to another (i.e. no complex interaction aspects).

~~~
Twisol
Oh, that's very cool! I had a similar idea years ago, but I didn't have the
technical chops to pursue it at the time, and I ended up losing interest. I
think this would actually be even _more_ useful in the kind of architecture
I'm describing, since the accumulated state has a richer structure, and many
of the smaller bits of state that would be separate objects are put into a
larger context.

> From what I've seen, structuring a program to not modify state is almost
> always more difficult than the alternative

You're not wrong! I don't think we should get rid of mutable state, but I _do_
think we should be much more cognizant of how we use it. Mutation is one of
the most powerful tools in our toolbox.

I've found that keeping a separation between "computing a new value" and
"modifying state" has a clarifying effect on code: you can more easily test
it, more easily understand how to use it, and also more easily _reuse_ it. My
personal experience is that I can more easily reason locally about code in
this style -- I don't need to mentally keep track of a huge list of concepts.
(I recall another quip, about asking for a monkey and getting the whole
jungle.)

There is a large web app at my workplace that is written in this style, and it
is one of the most pleasant codebases I've ever been dropped into.

~~~
westoncb
Interestingly, I think I built that project with an architecture somewhat
reminiscent of the 'boundaries' concept (still just surmising at this point).
It's a super simple framework with two types of things: 'Domains' and
'Converters'. Domains are somewhat similar to packages... but with the
boundaries actually enforced, so that you have to explicitly push or pull data
through Converters to other Domains; Converters should just transform the
format from one Domain to that of another (they are queue-based; also
sometimes no translation is necessary).

I'll quote from the readme:

> This Domain/Converter framework is a way of being explicit about where the
> boundaries in your code are for a section using one ‘vocabulary,’ as well as
> a way of sequestering the translation activities that sit at the interface
> of two such demarcated regions.

Inside each Domain I imagine something like an algebra... a set of core data
structures and operations on them.

But yeah, I have very frequently thought about visualizing its behavior while
working on that visualizer :D

Is your research related to programming languages?

Also I'm going to have to think about "computing a new value" vs. "modifying
state" —not sure I quite get it...

~~~
Twisol
> Is your research related to programming languages?

Yep: I just finished a Master's degree with a focus on programming language
semantics and analysis. I'm interested in all kinds of static analyses and
type systems -- preferably things we as humans can deduce from the source
without having to run a separate analysis tool.

> Also I'm going to have to think about "computing a new value" vs. "modifying
> state" —not sure I quite get it...

It's kind of a subtle distinction. A value doesn't need to have any particular
locus of existence; semantically, we only care about its information content,
not where it exists in memory. As a corollary, for anyone to use that value,
we have to explicitly pass it onward.

On the other hand, mutation is _all about_ a locus of existence, since
ostensibly someone else will be looking at the slot you're mutating, and they
don't need to be _told_ that you changed something in order to use the updated
value. (Which is the root of the problem, quite frankly!)
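
A tiny Python sketch of the distinction (the function names are invented):

```python
def normalized(xs):
    """Computes a new value; the caller must explicitly pass the result onward."""
    total = sum(xs)
    return [x / total for x in xs]

def normalize_in_place(xs):
    """Mutates a slot; anyone else holding `xs` sees the change without being told."""
    total = sum(xs)
    for i in range(len(xs)):
        xs[i] = xs[i] / total
```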

------
Chabs
One thing that I'm surprised this doesn't cover, especially since it's so
C++-centric, is that modern C++ OOP is much more defined by lifetime/scope
management than anything else. What defines something as an object is the fact
that it doesn't exist until it is constructed, and doesn't exist after it gets
destructed (which is the case even for fundamental types, with the exception
of char/std::byte, btw).

Hot take: RAII has basically taken over everything else as far as structural
design foundation goes. Type erasure and encapsulation still play a role, but
it's not nearly as fundamental anymore.

~~~
stochastic_monk
RAII is my primary use for objects in C++, and I believe Rust is similar.
Inheritance has fewer uses for me.

------
p2t2p
All my attempts to jump off the OOP train break at the exact same moment: when
I try to write a unit test.

Clojure: either use this monstrous component pattern where 70% of your code is
boilerplate, or hack into namespaces and override functions in them at
runtime. And don’t forget to do it in the right order!

JavaScript: yeah, simply re-define ‘require’ before importing dependencies in
tests. Yeah, do it in the right order.

Recent example: I was researching how to mock calls to functions in packages
in Go... Well, the best thing you can do is to have a package-private
variable, assign the function to it, and use it throughout the code so you can
swap it with a mock/stub in a test.

There is none of that bs when I write Java or C#. I have a mechanism to
decouple contracts from implementations - interfaces. I have a mechanism to
supply dependencies to modules - it’s called constructor parameters. I can
replace implementations easily with mocks or stubs in tests without the target
even noticing.
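
For concreteness, here is a rough sketch of that interface-plus-constructor-
parameters pattern, translated into Python (the names `Mailer`, `Greeter`, and
`StubMailer` are made up for illustration):

```python
from typing import Protocol

class Mailer(Protocol):
    """The 'interface': a contract decoupled from any implementation."""
    def send(self, to: str, body: str) -> None: ...

class Greeter:
    def __init__(self, mailer: Mailer):
        # The dependency is supplied via a constructor parameter.
        self._mailer = mailer

    def greet(self, user: str) -> None:
        self._mailer.send(user, f"Hello, {user}!")

class StubMailer:
    """Test double; Greeter never notices it isn't the real thing."""
    def __init__(self):
        self.sent = []
    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))
```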

Can somebody provide me with an example of this kind of decoupling achieved in
other paradigms _without_ hacking the runtime of a language, or ugly tricks
like in the Go case?

~~~
loup-vaillant
Give it up. Mocks are mostly useless.

If you want testable code, the first step is to separate computations from
effects. Most of your program should be immutable. Ideally you'd have a mostly
functional core, used by an imperative shell.

Now to test a function, you just give it inputs, and check the outputs. Simple
as that.

Oh you're worried that your function might use some _other_ function, and you
still want to test it in isolation? I said give it up. Instead, test that
other function first, and when you're confident it's bug free (at least for
the relevant use cases), _then_ test your first function.

And in the rare cases where you _still_ need to inject dependencies, remember
that most languages support some form of currying.
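
A sketch of that last point in Python, using `functools.partial` in place of
currying (`fetch_greeting` and `http_get` are invented names): the effectful
dependency is just a function argument.

```python
from functools import partial

def fetch_greeting(http_get, url):
    """`http_get` is the injected dependency: any callable taking a URL."""
    return "Hello, " + http_get(url)

# Production code would partially apply a real HTTP client;
# a test partially applies a stub instead.
greet = partial(fetch_greeting, lambda url: "world")
```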

~~~
humanrebar
That's a solid plan for acceptance testing and a great way to make sure you
can never test diagnostic, recovery, rollback, and other something-abnormal-
happened-here logic.

For small programs with few interfaces to worry about, that might be fine, but
as your number of users goes up, the odds go up that you'll be ensuring
rollback bits get flipped when filesystems fail. None of that is simple to
test without some sort of dependency injection or other heavyweight design
pattern.

~~~
yen223
Why would you use mocks to test those things?

~~~
humanrebar
Because manually triggering an optimistic locking failure is a pain.
Triggering one of those _and_ a filesystem write failure at the same time is a
whole pile of work compared to the mocking.

------
jf-
Software developers are systematisers by default. We tend to value complexity
for its own sake, hence the over-engineering common to software projects. The
methodologies we use fall victim to the same tendency. We build complex, rigid
rule sets that are claimed to improve software or development speed or
whatever else, without any actual empirical evidence that these claims are
true.

All you can really do is try to be knowledgeable about the methodologies, use
the right tool for the right job, and try to keep things as simple as
possible. And don’t subscribe to anybody’s dogma.

~~~
ken
> use the right tool for the right job, and try to keep things as simple as
> possible

Now all we need to do is get the field to agree on universally applicable
definitions of "right tool" and "simple", and never change any requirement
after any technical decisions have been made, and we'll be all set!

I had a manager who used the term "ice cream" for phrases like this that sound
good (everybody loves it!) but don't help drive any useful conversations or
decisions. Should we use the right tool for the job, or the wrong one? Let's
use the right one! OK, are we all agreed? Great! It's unanimous. Next issue.

Unfortunately, the 5 people sitting around the table each have a completely
different conception of what this means, so we're no closer to a decision than
when we started. It's simply not a useful guide or metric. I think it's mostly
code for "be quiet and do as I say".

~~~
jf-
Depends on how needlessly argumentative your team is. Generally you can reach
a consensus through discussion, example and experimentation. We do have
intuition, generally we know what simple looks like when we see it, likewise
we recognise what the right tool looks like as we try several.

Do you want a checklist for this kind of thing? You’re not going to get one.
You have to use your own judgement.

Possibly you’ve fallen victim to being on teams where ego dominates, and
members refuse to seek the best option unless they came up with it themselves.

------
maxxxxx
If we had just treated inheritance as something to be avoided unless
absolutely needed, then OOP would probably never have gotten such a bad
reputation. All the other concepts make perfect sense.

~~~
3pt14159
I've found that inheritance is very useful in one situation, and adds nothing
over mixins otherwise.

If a class does two broad things simultaneously then inheritance can work
great. For example, a User class that inherits from a DB mapper class. I don't
want to have to tell my class how to write a record to the DB. All that code
can be centralized into one thing and then relied upon for its uniformity
across all my models.

This isn't true the way most people use inheritance though. They do things
like Sword inherits from Weapon and Weapon inherits from Item. But this just
asks for trouble because as requirements get more complex there are more edge
cases and the complexity bubbles up into overriding the inherited methods,
which makes them less reliable from different calling contexts, or flipping
the OO script and pushing class-based-if-statements in the ancestor class.

Then you step back and say "why did we make Sword a weapon in the first
place?" and the answer was we had logic somewhere else in the code that did
things like check if a user was armed. Well we don't need inheritance for that
at all. We can use plain old methods and properties / duck typing.
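
A sketch of that alternative in Python (invented names): no Item/Weapon/Sword
hierarchy, just a plain property check.

```python
class Sword:
    damage = 7

class Torch:
    damage = 0

def is_armed(item) -> bool:
    # Duck typing: ask the object a question instead of asking what it *is*.
    return getattr(item, "damage", 0) > 0
```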

~~~
hota_mazi
Duck typing is the worst of all worlds, in my experience.

And in defense of inheritance, the Liskov Substitution Principle is extremely
useful and makes a lot of sense.

If a function accepts a `Weapon` as parameter, surely you should be able to
pass it a `Sword`.

~~~
humanrebar
Except Firearms require the reload() method be called periodically between
calls to use(). So now we have to think about whether Swords need an empty
reload() method or whether there need to be separate Firearms and Blade
interfaces based on Weapon.
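
One common way out of that dilemma is the second option: split the fat
interface into narrower ones, so Swords never see `reload()`. A Python sketch
with invented names:

```python
from typing import Protocol

class Weapon(Protocol):
    def use(self) -> int: ...

class Reloadable(Protocol):
    def reload(self) -> None: ...

class Sword:
    def use(self) -> int:
        return 7

class Firearm:
    def __init__(self):
        self.rounds = 0
    def reload(self) -> None:
        self.rounds = 6
    def use(self) -> int:
        if self.rounds == 0:
            raise RuntimeError("empty: call reload() first")
        self.rounds -= 1
        return 10

def attack(weapon: Weapon) -> int:
    # Substitution still holds: any Weapon works here.
    return weapon.use()
```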

~~~
hota_mazi
Yup, that's how it works. And why inheritance and polymorphism are so popular:
it's powerful, easy to explain, and captures elegantly a lot of problems we
need to model.

In contrast, non-class-based languages such as Haskell struggle to model
problems that are trivial in OOP, such as how to reuse 90% of existing
functionality but override 10% with more specialized behavior. Good luck
solving that problem elegantly with type classes.
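
For illustration, the 90/10 shape the comment has in mind, sketched in Python
with invented names: the subclass overrides one small method and reuses
everything else unchanged.

```python
class CsvExporter:
    def export(self, rows) -> str:
        # The reusable 90%.
        return "\n".join(self.format_row(r) for r in rows)

    def format_row(self, row) -> str:
        return ",".join(str(x) for x in row)

class TsvExporter(CsvExporter):
    # The specialized 10%: only the row format changes.
    def format_row(self, row) -> str:
        return "\t".join(str(x) for x in row)
```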

------
agentultra
For me, whenever someone invokes the GoF or SOLID, I’m reminded of the
Brothers Grimm. It’s programming by folklore. That’s all the GoF did: they
went out into the world and tried to observe how programmers were structuring
their programs. And they seemed particularly interested in programmers using
OOP.

All of these principles have very little basis or formal definition. Bertrand
Meyer did make some headway with Eiffel. But the type systems are so weak and
the lack of formal semantics makes all of these discussions a bit of hand
waving and bike shedding.

At least the DOD (data-oriented design) folks have some guiding philosophy and
are trying to optimize the design of programs to account for the memory
latency of modern hardware architectures.

The OOP defenders are basing their argument on hot air and hand waving.

There are more interesting languages these days with better designs. Ones that
are based on better theories in my opinion.

OOP will be around for a long time if only because it has so many adherents
and people will be stubborn to change if history has anything to say about it.

------
LolNoGenerics
> This code may be typical of OOP in the wild, but as above, it breaks all
> sorts of core OO rules, so it should not at all be considered traditional.

This argument drives me nuts. A simple OOP environment can be quite easy to
grasp and (mis)use. Following all the golden rules and principles that SOLID
et al. call for is comparably hard to internalize. A rookie can only fail, and
has to collect, choose, and study all the wisdom over the years until he
becomes a master. Up to that point he will create OO code that breaks untold
rules. Ruling the majority of OO code out there as "bad" is hubris and ignores
reality. It is easy to do OOP wrong and hard to get it right. This imbalance
is proof enough for me that we don't know what we are doing, just justifying.
(Yes, this may apply to other paradigms as well.)

~~~
dkarl
_Ruling the majority of OO code out there as "bad" is hubris and ignores
reality. It is easy to do OOP wrong and hard to get it right._

It wasn't the inherent nature of OOP that caused that terrible style to become
dominant. It's a style that was actively taught and promoted since at least
the mid-1990s. It was taught, and taken for granted, that objects were
containers for mutable values. Deep inheritance hierarchies were taught as the
norm, not the exception. Java was built around this model and then became one
of the most popular programming languages in industry, and learning materials
for Java reinforced the style. Everyone interviewing for a Java programming
position from the late 1990s through the mid 2000s had to learn special jargon
related to this style and regurgitate it in interviews. We're suffering
through a hangover from decades of this horrible version of OOP being promoted
as the "right" way to write software in industry and academia.

------
ben509
This is the "you're holding it wrong" defense of OOP.

Of SOLID, S, O, I and D are rehashing structured programming design
principles. LSP is peculiar to it, and not an unreasonable way of thinking of
objects, but I don't see people struggling to figure out how to come up with
workable class hierarchies.

The _point_ of objects was that they were intuitive and easy, and where it
tends to fall apart is in the details. It's often small tasks like writing an
equality operator correctly that are absurdly complicated[1]. And while we can
construct reasonable class hierarchies, the interaction becomes a bear and the
bugs are subtle and confusing.

What I see in OOP programming is that people avoid various idioms or patch
around them because they don't trust their tools.

I think the problem with most OOP languages is that some high-level concepts
like inheritance were constrained by very low-level implementations, and they
often tried to glom several ideas together.

There's no "object algebra" even 40 years in. In C-like languages, objects are
using a "struct and vtable in the heap" model. In dynamic languages, they're
using the "type instance and a hashtable in the heap" model. Then they
typically declare that a value is really a variable, unless it's an atom, and
often add other weird asymmetries like "the bottom type actually _does_ have a
value which is 'null'" and, of course, whatever weirdness they pick up such as
floating point.

Those constructs are then overused; this problem is especially apparent in
Java where "everything is an object" means that your class becomes your tuple
type, and if you want to combine two tuple types you're going to do that via
inheritance. In most of them, you don't have a proper discriminated union, so
all the stuff you'd do with sum and product types you now have to shoehorn
into classes whether it makes sense or not.
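
To illustrate what gets shoehorned: a discriminated union pairs a small fixed
set of variants with a function that handles each case. A sketch
approximating one in Python (the `Shape` example is invented):

```python
from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    w: float
    h: float

def area(shape) -> float:
    # One function, one branch per variant -- instead of spreading the
    # logic across a class hierarchy via overridden methods.
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    if isinstance(shape, Rect):
        return shape.w * shape.h
    raise TypeError(f"not a shape: {shape!r}")
```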

It all sort of works, but the reason OOP languages keep adopting non-OOP
features is that it doesn't work very well.

[1]: [http://jtechies.blogspot.com/2012/07/item-8-obey-general-con...](http://jtechies.blogspot.com/2012/07/item-8-obey-general-contract-when.html)

------
Reedx
Some other takes:

Casey Muratori (Handmade Hero) on why OOP is bad and how to get rid of that
mindset -
[https://youtu.be/GKYCA3UsmrU?t=4m50s](https://youtu.be/GKYCA3UsmrU?t=4m50s)

Mike Acton (Engine Director @ Insomniac Games) -
[https://www.youtube.com/watch?v=rX0ItVEVjHc](https://www.youtube.com/watch?v=rX0ItVEVjHc)

~~~
danschuller
Mike Acton works for Unity now.

------
Spearchucker
Took me a long time to grok OOP and OOD (was introduced to OOP in '91). At
first I thought I knew it, and then realized I didn't.

Plane into the side of the mountain, no survivors, call off the search. Which
is when I really started to learn (around '96/97). And now I love it.

Until I come across people who _always_ start by defining an interface first
and then think about what might follow. And dependency injection. Holy
priceless collection of Etruscan snoods, DI makes me want to gouge out my
eyeballs with a rusty corkscrew.

Like I said, though, I love it, and today my happy land is about 80% OOP and
20% everything else.

~~~
hacker_9
DI is bad, what? So how are you writing tests then.

~~~
Spearchucker
Like in procedural languages, I write my own harnesses as needed. Because DI
adds complexity, and all it does is make testing easier, you end up shipping
all that complexity or refactoring your ship code. To be fair, that might be
acceptable to many in a typical corporate environment.

~~~
vietjtnguyen
What do you mean by DI? I find DI is an overloaded term.

~~~
ahansen
In this context I believe he is referring to dependency injection.

~~~
vietjtnguyen
I guess I meant to ask what they mean by dependency injection. A bare version
is just passing dependencies as arguments and I can't imagine what is so
egregious about that. Maybe they mean something more complicated?

~~~
steveklabnik
Given "I write my own harnesses as needed", I read the parent as talking about
DI frameworks, not the general concept of DI.
[https://en.wikipedia.org/wiki/Dependency_injection#Dependenc...](https://en.wikipedia.org/wiki/Dependency_injection#Dependency_injection_frameworks)

------
evancox100
Author needs to actually state what ECS is. From context I don't think he/she
is referring to Amazon's Elastic Container Service.

~~~
flohofwoe
It's Unity's (the game engine) new Entity-Component-System, the blog post is
an answer to this presentation:

[http://aras-p.info/texts/files/2018Academy%20-%20ECS-DoD.pdf](http://aras-p.info/texts/files/2018Academy%20-%20ECS-DoD.pdf)

Unity's traditional entity system is suffering from a number of "OOP-isms"
which make it hard/impossible to optimize for performance.

The new ECS strictly follows a Data-Oriented-Design approach, where everything
is built around laying out the data in memory in a CPU-cache friendly way (and
a few other things that neatly 'fall into place', like spreading work across
CPU cores, a specialized 'high-performance' C# dialect, and the ability to
move work from the CPU to the GPU).

The big question is how the traditional Unity audience will react, since the
ECS programming model is quite a bit different from the old way, and it's no
longer as simple to build a game from a Jenga tower of ad-hoc hacks ;)
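
The layout shift at the heart of this can be sketched even in Python
(illustrative only; the real payoff comes from cache-friendly native code):
array-of-structs scatters each entity's fields across objects, while
struct-of-arrays keeps each component contiguous.

```python
class EntityAoS:
    """'OOP' layout: one object per entity, fields scattered on the heap."""
    def __init__(self, x, vx):
        self.x, self.vx = x, vx

def step_aos(entities, dt):
    for e in entities:
        e.x += e.vx * dt

class PositionsSoA:
    """DoD layout: one array per component, processed in bulk."""
    def __init__(self, xs, vxs):
        self.xs, self.vxs = list(xs), list(vxs)

    def step(self, dt):
        self.xs = [x + vx * dt for x, vx in zip(self.xs, self.vxs)]
```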

~~~
TeMPOraL
I don't think Entity-Component-System is Unity-specific. From what I recall
from other articles, Unity has its own idiosyncratic implementation, but the
pattern has several slightly different interpretations.

~~~
_halgari
Correct, the first game to use ECS was Dungeon Siege back in 2002:
[https://www.gamedevs.org/uploads/data-driven-game-object-sys...](https://www.gamedevs.org/uploads/data-driven-game-object-system.pdf)

~~~
Hodgman
Dungeon siege used an "Entity/Component" framework. That's a very different
thing to an "Entity/Component/System" framework.

------
pjmlp
Published in 1997, _"Component Software: Beyond Object-Oriented
Programming"_, followed by _"Component-Based Software Engineering: Putting the
Pieces Together"_ in 2001.

[https://www.amazon.com/Component-Software-Beyond-Object-Orie...](https://www.amazon.com/Component-Software-Beyond-Object-Oriented-Programming/dp/0201745720/ref=sr_1_1)

[https://www.amazon.com/Component-Based-Software-Engineering-...](https://www.amazon.com/Component-Based-Software-Engineering-Putting-Together/dp/0201704854/ref=pd_sim_14_1)

The problem is how badly many schools teach OOP paradigms, and how many
frameworks abuse a specific style of OOP.

------
DrNuke
To be fair, in the late ‘90s and early ‘00s it was a very clean way to make
communication work among teams, at a time still deep in the non-internet era
and oriented toward C or Fortran.

------
bribri
I can't think of any OOP abstractions that I prefer to functional
abstractions. If you really need has-a/is-a relationships or mutability, you
can get them a la carte with a language like Clojure, but they're not deeply
baked into the language, nor the encouraged pattern for extending code.

------
noncoml
The problem is not OOP, but how C++ and Java implement it. Ruby is a much
nicer OOP platform.

~~~
twic
How so?

~~~
noncoml
The most important part missing IMHO is late binding.

------
zdmc
A good heuristic: if you’re not dealing with “state” (e.g., games, DB ORMs,
reinforcement learning), then don’t use OOP.

~~~
lerno
Really? I’d say the accidental distribution of state across objects due to
cross-cutting concerns is exactly where OO breaks down for me.
~~~
Hodgman
If cross-cutting concerns are making your architecture unwieldy, you likely
haven't used composition enough / are doing OOP the bad way (tm).

~~~
lerno
No, some logic simply isn’t cleanly decomposable. Plus, the main problem here
is that objects let you get away with implicit state (e.g. if (this.x == 0)
doA() else doB(); ) for long enough that by the time you realize you need
explicit state, it’s usually distributed quite a bit.
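
A tiny sketch of that distinction (the widget classes here are hypothetical; only
the `if (this.x == 0)` pattern comes from the comment above):

```python
from enum import Enum

# Implicit state: behavior branches on a raw field, so the set of possible
# "modes" is never written down anywhere -- you discover it branch by branch.
class ImplicitWidget:
    def __init__(self):
        self.x = 0

    def step(self):
        return "doA" if self.x == 0 else "doB"

# Explicit state: the modes are enumerated up front, so adding a third mode
# forces every transition to be reconsidered instead of silently falling through.
class Mode(Enum):
    A = "a"
    B = "b"

class ExplicitWidget:
    def __init__(self):
        self.mode = Mode.A

    def step(self):
        return "doA" if self.mode is Mode.A else "doB"
```

Both behave identically today; the difference shows up when the third mode arrives.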

------
wolfspider
This is a pretty good discussion and I’m surprised that while discussing OOP
not much was brought up in the way of managing private versus shared memory
which I think depending on the platform is not as universal as we all hoped it
would be by now. Marshalling objects with pointers accessing vtables over
uneven terrain is how it goes down. I think here is a good example:
[https://trac.webkit.org/wiki/WebKitIDL](https://trac.webkit.org/wiki/WebKitIDL)

And to further that point there is no more Safari on Windows for this reason
among many others. Remember MemMaker? It’s crazy that, back when there were so
few applications, we managed some of the memory ourselves, but it worked very
well, didn’t it? OOP really took off back then too, and memory utilities were
no longer needed and didn’t last long. The convenience of not worrying about
memory is one of the many things OOP was able to solve as it advanced. It is still
pulling off the same tricks today in a much more complex and metered way. OOP
does so much more than just this of course but the solutions developed with it
for managing memory are intense and as much art as science. So we should
question it and many paradigms to make this better. A mentor of mine when
explaining this would compare it to juggling...and then proceed to actually
start juggling while talking about his code. He’d stop and look up, just
pause, and say that’s all we are doing here just juggling.

------
faragon
More resources on DOD (Data Oriented Design):

[https://github.com/dbartolini/data-oriented-
design](https://github.com/dbartolini/data-oriented-design)

------
LiterallyDoge
I don't understand why this article is so angry? ECS is a great subset of OOP.
Both are helpful tools where they make sense.

~~~
learc83
ECS as they are using it in this blog is about Data Oriented Design. It's not
just OOP plus favoring composition over inheritance.

DOD explicitly advocates separating data from behavior, and is strongly
opposed to OOP in general.

~~~
LiterallyDoge
Just so I understand you right: they're advocating global functions to operate
on predictably similar data structures? If that's the case, it seems like
you'd want some of them to be object-oriented, and some not.

~~~
learc83
Each system tends to operate on different sets of data which don't tend to be
very similar, and systems may operate on multiple data sets.

You could build each system as an object, but you wouldn't want to store the
relevant data structures within that object because other systems will likely
need to use those data structures as well.

The entire architecture is predicated on separating data and behavior. Yes you
can build an ECS system using classes, but nothing about it fits into what
you'd call OOP.
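
A minimal sketch of that separation, with invented component stores and a single
system (no particular engine's API is implied):

```python
# Components are plain data keyed by entity id; no behavior lives with them.
positions = {1: [0.0, 0.0], 2: [5.0, 5.0]}
velocities = {1: [1.0, 2.0]}   # entity 2 has no velocity component

# A system is just a function over whichever component stores it needs.
# Other systems (rendering, damage, ...) can read the same stores freely,
# which is why the data can't be locked inside a system object.
def movement_system(positions, velocities, dt):
    for eid, vel in velocities.items():
        if eid in positions:
            positions[eid][0] += vel[0] * dt
            positions[eid][1] += vel[1] * dt

movement_system(positions, velocities, dt=1.0)
# positions[1] is now [1.0, 2.0]; entity 2 was untouched
```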

------
heinrichhartman
Does anyone know of a good reference for idiomatic OO(P)?

Like the Codd paper for Relational Algebra.

~~~
noblethrasher
Well, the best analogue to Codd's relational algebra is Hewitt's actor
model in my professional opinion. Both are based on mathematical formalism,
though the Actor Model goes a bit further in that it's also informed by
physics.

But, just as SQL doesn't really implement Codd's relational algebra, so it is
the case that most so-called OOP languages miss the mark vis-à-vis Alan Kay's
original conception.

The analogy is buttressed by the fact that many early RDBMSs didn't even
support joins (path independence being an essential characteristic of RA),
just as many mainstream OOP languages didn't/don't _idiomatically_ endow
objects with a strong way to protect themselves (encapsulation being a
necessary characteristic, thus pervasive use of setters being the main sin).
But, Kay did praise Erlang for getting OOP right.

~~~
da02
Kay also mentioned the Internet is an OO system. At one point he was thinking
every object would have its own IP address.

He does have an account here on HN.
[https://news.ycombinator.com/threads?id=alankay1](https://news.ycombinator.com/threads?id=alankay1)
I hope he doesn't get tired of explaining the same things over-and-over again.
I have asked questions and he has answered, but I usually end up
misinterpreting the ideas. :(

~~~
noblethrasher
Yep, I follow him pretty closely, and knew about his account and comments on
here.

Funny story: That is at least the _second_ account that he created on HN. He
registered an earlier one just to reply to a comment that I had made[1].

[1]:
[https://news.ycombinator.com/threads?id=alanone1](https://news.ycombinator.com/threads?id=alanone1)

~~~
da02
Ha ha. I always wondered why that lonely one-comment account existed. I
suspect many people are using data abstraction and calling it OOP. I am not a
professional programmer. What do you like to use when designing software?
Functional? Types? C? Haskell? Pharo?

------
std_throwawayay
OOP is just a mental model. Deep down everything is made of bits. The church
of OOP has failed, but if something looks like a duck, walks like a duck, and
talks like a duck, it is probably useful to make a duck class. We're now down
to fighting over nuances. You can do most things with OOP or without it, but
each path has upsides and downsides, and most of the time it's good to use the
things it provides where they make sense and not get too religious about it.
The great architect has the foresight to know how the code will be used in
five years and designs it accordingly.

~~~
Waterluvian
I think this relates to what you're saying.

I've never felt any frustration that OOP feels like the wrong tool when I'm
using languages that give me the choice to use it or not (like Python, and
JavaScript). But when I'm using Java, as one example, it often feels like I'm
really locking myself into a design up front.

In Python especially, I'll find myself starting off all experiments or simple
projects with functions and basic data types. As something evolves and I want
some semantic clarity I'll stop using dicts and start using namedtuples. And
then at some point I may replace the namedtuples with classes. From there I
may discover value in having subclasses so I'll add a few (but this is
exceedingly rare in my line of work).
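
A quick sketch of that progression (concrete names invented for illustration):

```python
from collections import namedtuple

# Stage 1: a plain dict -- fastest to start with, nothing enforced.
point_dict = {"x": 1.0, "y": 2.0}

# Stage 2: a namedtuple -- named, immutable fields for semantic clarity.
Point = namedtuple("Point", ["x", "y"])
point = Point(x=1.0, y=2.0)

# Stage 3: a class -- only once behavior genuinely belongs with the data.
class MovablePoint:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def translated(self, dx, dy):
        return MovablePoint(self.x + dx, self.y + dy)

moved = MovablePoint(1.0, 2.0).translated(1.0, 1.0)
```

Each step pays for its ceremony only when the previous one stops being enough.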

~~~
Chabs
But that's the whole point of Java. It's an opinionated platform with a hyper-
standardized workflow. Sure, that limits creativity, but in many business
contexts, the last thing you want is your programmers getting "cute".

There's a straight line from requirements to implementation; no meandering
involved. At least that's the theory. In practice...

~~~
bunderbunder
> But that's the whole point of Java. It's an opinionated platform with a
> hyper-standardized workflow.

That may have been where Java wanted to go, but, when I'm working in Java, I
don't feel like that's where I am. Ways of doing things in Java tend to be
wildly inconsistent from project to project. Partially, I think, because so
much core functionality in the Java ecosystem was allowed to be federated out
to 3rd-party projects for so long. Take the long-standing popularity (and
rivalry) of Guava and Apache Commons for handling even basic tasks that are
hard to get done using the core Java APIs. If there's such a thing as a
"platform smell", I'd say that certainly qualifies.

With Python, on the other hand, there is a fairly consistent common
understanding of what "Pythonic" means, and, even when there really is more
than one way to do it, the question of which one to use can usually be quickly
resolved to a predictable outcome by simply pointing out that one option is
the more Pythonic way to do things.

(edit: Though, to be fair, Java was first released into a world where
languages like C, C++ and Common Lisp represented the status quo. Expectations
were lower at the time.)

~~~
mikmoila
To be honest, I've never seen a problem I couldn't solve more easily with
core modern JDK libraries than with "3rd party" libraries.

~~~
bunderbunder
It's definitely gotten better over the past 5 or so years. But there was a
_lot_ of time spent acquiring technical debt over the preceding couple
decades.

Even if I don't use Guava or Apache Commons myself, for example, I still
occasionally run into dependency conflicts that I need to resolve with awful
hacks like package relocation because so many other major libraries rely on
one or the other, and neither library is a particularly great citizen about
breaking changes.

------
Sharlin
I’d be very wary of hiring an ”OO” dev who can’t reasonably formulate what the
SOLID principles are and why they exist.

~~~
gambler
SOLID is mostly a bunch of over-confidently stated opinions.

Here is my over-confidently stated opinion: if a principle is not applicable
to Smalltalk 74, it is not essential to OOP.

The only rule in SOLID I would say someone should follow at all times is L,
and even then only in statically typed languages.

This should be _obvious_. For example, can you define "responsibility" in any
way that's not entirely gut feeling? No. So how can you fault someone for not
knowing a rule that's based on gut feeling?

~~~
jschwartzi
It's not really gut feeling. The following is very abstract but it's how I
think about the "single-responsibility principle."

When you're designing a software system you need to step back from the
individual components for a moment and consider the overall system -- who is
using this system, what is the problem being solved here, and how does this
solution relate to other problems and solutions within the system. It's not
obvious only if you don't completely understand the problem domain, which most
people don't.

If you find yourself thinking that some object's responsibility is ill-
defined, you need to talk to whoever authored that part of the code to see if
there's something you're missing about the system that the code controls.
Either you're missing something or they are, and either way the conversation
will make the software system better somehow, which is the goal.

So once you understand the system and you understand the problem that you're
trying to solve, and you think you have a solution, it's an exercise for you
to sit down and identify specific activities that the software system has to
undertake in order to solve the problem within the context of the system. You
can consider these "responsibilities" at a top level and start partitioning
them up further into classes that mutate and emit data in response to data
received from other components of the system. The boundaries of these classes
are interfaces, and each interface should be responsible for operating on a
specific kind of data in response to its inputs.

Now the interface itself also needs to be defined in terms of the problem the
class is solving. So you use words like "setMotorSpeed" on a class called
"ConstantSpeedServo," and your ConstantSpeedServo might be composed with a
"ValveController" which is responsible for controlling the position of a valve
based on some input data.

This all goes back to thoroughly understanding the system you are controlling
so that you can write code in terms of that system, but it goes beyond that in
that you have to be very explicit in your understanding of what it is the
system does in order to write software that accomplishes that goal.
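
A toy Python rendering of that servo/valve pairing (the class and method names
follow the comment above; the gain and method bodies are invented purely for
illustration):

```python
# The servo's interface speaks the domain's language ("motor speed"),
# not the implementation's ("write register 0x4A").
class ConstantSpeedServo:
    def __init__(self):
        self.speed = 0.0

    def set_motor_speed(self, rpm):
        self.speed = rpm

class ValveController:
    """Single responsibility: turn a desired valve position into a motor speed."""
    def __init__(self, servo, gain=10.0):
        self.servo = servo
        self.gain = gain          # hypothetical proportional gain
        self.position = 0.0

    def move_to(self, target):
        # proportional drive toward the target position (illustrative only)
        self.servo.set_motor_speed((target - self.position) * self.gain)
        self.position = target

servo = ConstantSpeedServo()
valve = ValveController(servo)
valve.move_to(1.0)
```

The point is the vocabulary at each boundary, not the control law.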

------
beders
I think OOP is fine as long as it's the only thing you are doing and as long
as it is single-threaded.

Problems will occur if you need to convert the innards of your objects to data
(i.e. JSON etc.), if you need to materialize objects from data (i.e. ORM) or
if you need to write multi-threading safe code.

If you expose things as data, just be honest about it and treat it as such.
It's already out in the open, why hide it in objects again?

~~~
jschwartzi
> I think OOP is fine as long as it's the only thing you are doing and as long
> as it is single-threaded.

I don't really see how threading has anything to do with it.

What OOP allows you to do is choose what terms you want to express your
solution in. You can choose language that exposes thread-safe aspects of your
problem domain without requiring the user to be aware that they are expressing
things in thread-safe terms. In my last project one of the early elements of
our architecture was a thread-safe message queue for each thread to receive
messages from across the thread boundary. Discussions on our team involved
talk of messaging and threading because those were the abstractions we chose
to use to express our solution.

We knew that order of access was important and so we chose to make it explicit
in the language we used to express the problem. And the abstraction we chose
allowed us to hide the details of the problem in such a way that we did not
have to worry about memory protection as long as we were consistently using
the architecture we built. That was a concern of the messaging system's
maintainer. We could just as easily have chosen rows and columns and operators
on such and expressed the same solution in those terms. What mattered to us
was making sense of the problem, domain and solution.

It really doesn't matter which paradigm you use. What matters more is that you
express your solution in terms that are consistent with the problem domain,
and that you actually make an effort to understand the problem you're trying
to solve well enough to express the solution. Anything beyond that is just
sugar.
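
A rough Python sketch of that per-thread inbox idea (the parent's system was
presumably larger and not in Python; the worker and sentinel here are invented
for illustration):

```python
import queue
import threading

# Each worker owns one thread-safe inbox; other threads communicate only by
# enqueuing messages, never by touching the worker's state directly.
def worker(inbox, results):
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel: shut down cleanly
            break
        results.append(msg * 2)  # all mutation happens on this one thread

inbox = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(inbox, results))
t.start()

for i in range(3):
    inbox.put(i)
inbox.put(None)
t.join()
# results is now [0, 2, 4]
```

Order of access is explicit in the message flow, so callers never reason about
locks at all.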

~~~
legulere
OOP is strongly linked to mutable objects, which are inherently thread-unsafe.
Sure, you can get thread-safety in OOP, but it’s hard. It’s the same reason
global variables are bad, only worse.

Mutability is rarely needed and often makes things more complicated than
needed.

~~~
jschwartzi
Well, my original point is that thread-safety is easy with the right
abstractions, and that choosing your abstractions is a conscious thing. So I
don't see how mutability is "inherently thread-unsafe." Rather, programmers
are "inherently thread-unsafe" and we have a responsibility to understand the
problem domain and choose abstractions that make sense to us and that actually
solve the problems we're encountering.

Now if you want to talk about specific languages, then I would generally agree
with you about C++ and C, but then there are ways you can design a program in
those languages to be thread-safe. With FP you're just choosing a different
solution to the problem which comes with its own drawbacks in some cases.

~~~
legulere
I just doubt that those abstractions are easy and would say that they often
break. Blaming the programmer for thread-unsafety in e.g. Java is like blaming
programmers for memory unsafety in C.

I agree if you stay in purely functional programming languages you run into
points that are solved better with mutability. However those parts of a
program are small and most parts are expressed better by pure functions.

~~~
jschwartzi
I wouldn't say that they "often break" considering that we never had a single
issue with our software that could be linked to any of our threading
abstractions in the 4 years of rigorous development and testing. But rather
it's important to be clear about how your abstractions are to be used, and
that you don't inject a lot of special-case logic into it. In our case we had
to accept significantly reduced performance in exchange for the abstractions,
but in the end it was worth not having to debug everyone's thread-safe code
separately. A lot of it came back to choosing an abstraction that was simple
enough to be well-understood and about not pushing the envelope too far.

------
jorgeleo
"Before you decide that OOP is shit and ECS is great, stop and learn OOD (to
know how to use OOP properly) and learn relational (to know how to use ECS
properly too)."

This is why DDD needs to be in place before using OOP

~~~
LolNoGenerics
So DDD is now a prerequisite to use OOP? Wow!

------
tomelders
Is FROOP a thing? Cos that’s what I’m doing it seems.

~~~
da02
Functional Reactive Object Oriented Programming?

~~~
tomelders
Yep.

------
leowoo91
OOP in game development is in good use with engines like UE4 and Godot, so it
is about choice. What really matters is how comfortable you are with the way
you develop the game.

~~~
gracenotes
Oddly enough I've been getting into UE4 and things would be so much easier for
me if the core classes were better decoupled.

For instance, I'd like to have a quadruped skeleton with the ability to apply
a movement vector. Unfortunately, in the Actor->Pawn->Character hierarchy, you
cannot have movement (applying vectors, walk/jump/fall state) without a
capsule root component - which is the only one respected for collision - so
either I have to rewrite all of the movement code or somehow make the capsule
vestigial and carefully maintain its state.

In short, in some cases it's useful to say that a Pawn is an Actor with a set
of extra features, or that a Character is a Pawn with a set of extra features,
but even there, the way inheritance forces you to adapt an exact set of extra
features and no others and also those features are not piecewise reusable
elsewhere is a pretty big structural problem.
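
A toy sketch of the composition alternative, in Python rather than UE4's C++
(`MovementComponent` and `QuadrupedPawn` are invented names, not UE4 API):

```python
# Composition instead of inheritance: movement is a component that any entity
# can own, rather than a feature welded into a Character base class.
class MovementComponent:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]

    def apply_vector(self, v):
        # move by a displacement vector; no capsule component required
        for i in range(3):
            self.position[i] += v[i]

class QuadrupedPawn:
    # owns exactly the pieces it needs; nothing inherited and unused
    def __init__(self):
        self.movement = MovementComponent()

pawn = QuadrupedPawn()
pawn.movement.apply_vector([1.0, 0.0, 0.0])
```

The movement code stays piecewise reusable because nothing in it assumes a
particular base class or root component.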

I've already done enough arguing about OOP for one career in programming, I
think, but the more I see what's out there, the more that I think the real
enemy is any use of class inheritance - even the TAPL formalization of OOP
uses interfaces only.

~~~
leowoo91
I certainly agree; I've used ECS from the beginning. I was just surprised to
see a new engine like Godot follow OOP. It might be that OOP is a better fit
for the engine than for the game itself, and that distinction gets overlooked
easily.

------
SolarNet
Everything here is bad and wrong on so many levels. It's hard to even wrap my
head around the amount of wrong going on everywhere here.

The initial Object Oriented (OO) code - as partially demonstrated by the
author, and more succinctly by the grand-author - is badly designed. In the
grand-author's slides they remove a number of these stupidities, which the
author appears to ignore (both for final performance comparisons and for
understanding why they were even there). Notably they (the grand author)
originally built a reflection (runtime type) system [0] directly into their
game objects (application level code). This of course means they then ended up
fighting their programming language over the performance of this as they
attempted to use and improve it.

The author's improved code is also a total failure. He removes a significant
amount of the flexibility in the original system, which was a requirement of
the problem space. By removing features he gains a significant amount of
performance (in actuality about 2x - not 10x - compared to the grand author's
slides). The lack of understanding why those features were there in the first
place demonstrates he doesn't understand the actual design space of the toy
demonstration code. It also demonstrates he failed to read the grand-author's
slides where they go through some of these changes and why they aren't viable.

The author also fails to even discuss the Entity Component System (ECS) which
it should be noted _is still faster_ (and feature intact!) than the author's
code. I cannot be more emphatic on that point, an ECS is a solution to the
performance of run-time composition problems, and it still worked even better
than an attempt to do the OO solution right (again by removing features!).

Though the author makes an excellent point about how most ECS solutions tend
to fight the programming system they are in, he doesn't really explain or
demonstrate it. What the grand-author did wrong here (though probably actually
not in their talk considering who they work for) is not point out that the ECS
should be part of one's programming tool to be most effective. Notably the ECS
optimization should be part of one's programming language [1] (a game engine's
programming language - like Unity's - would count).

Point is this author is very wrong and does not understand design (in general)
nor the entity component system pattern in specific at all. The grand author's
example code base is unrepresentative of actual OO principles, and does not
provide reasonable advice on how to deploy an ECS. Oh and they both got wrong
that ECS solutions should be part of one's programming tool, not application
space.

[0] Engines are often attempting to solve a sufficiently complex problem space
that they require runtime type systems. This is not something one can avoid
unless writing a bespoke "engine" for a single game.

[1] Unless your programming language is sufficiently advanced to allow
creating - effectively - compiled code at runtime as a generalized library.
Which C++ is because of template meta-programming, but the included example
code is not.

~~~
ByThyGrace
Can you cite or link to properly implemented ECS solutions, in your opinion?

~~~
SolarNet
I think the C++ library
[https://github.com/skypjack/entt](https://github.com/skypjack/entt) gets
pretty close to the ideal for a purely compile-time implementation. It is
implemented with modern C++ template meta-programming - which is to say it is
basically a DSL encoded into C++ and can generate arbitrary code at compile
time - and while it is better than it used to be, C++ template meta-programming
is still very difficult to read. It also comes with a number of utility
libraries for using it efficiently and correctly, including a signal library,
service locator, and scheduler (among others).

It still has its limitations though. Because it is a compile-time library, it
can't do any runtime optimizations like it could with a JIT. Also, because C++
reflection is atrocious, it doesn't provide the best support for runtime type
manipulation (reflection features are often paired with the component pattern
implementation, especially in general purpose game engines). This comes back
to dynamically meta-programmed languages (e.g. python, lisp) that could
theoretically outperform even EnTT using a JIT - with the appropriate
reflection features - and meta-programming. However an ECS only really makes
sense in a non-garbage collected language (or if the language exposes a non-GC
meta-programming interface). Since there aren't really any viable JIT (+
reflection) + meta-programmed + non-GC languages out there, I don't think a
perfectly correct implementation exists.

I think an ideally implemented example will eventually be Jonathan Blow's
experimental language Jai (or a library for it) given what has been stated
publicly about it (metaprogramming, JITted, reflection, not garbage collected,
has built in support for SoA types). But that's not a viable example to look
at at the moment.

------
spacesarebetter
and the website is dead as well

