
Pretending OOP Never Happened - zdw
https://www.johndcook.com/blog/2020/05/15/pretending-oop-never-happened/
======
commandlinefan
Usually when I see somebody arguing against object-oriented programming, I
don't see them arguing for functional programming, but instead for procedural
programming (like Cobol, Basic, or Pascal). What they usually miss is that
there's a good reason procedural programming was abandoned around the
mid-'90s: you can't realistically develop useful software in a
pure-procedural way without introducing a lot of global state. Even if you
look at well-designed non-OO code like, say, the Linux kernel,
you'll see that there are object oriented concepts like information hiding and
polymorphism all over the place; they just don't formalize them with the
"class" or "private" keywords. Unfortunately, what I see most programmers do
is give up on OO design (and never even consider FP) and instead create global
state that they call "singletons" to pretend that they didn't just create a
global variable. Because, as we all know, global variables are bad, but too
few of us actually remember why.

~~~
btilly
_you can't realistically develop useful software in a pure-procedural way
without introducing a lot of global state. Even if you look at well-designed
non-OO code like, say the Linux kernel, you'll see that there are object
oriented concepts like information hiding and polymorphism all over the place;
they just don't formalize them with the "class" or "private" keywords._

You are misattributing that to OO programming.

The original edition of _Code Complete_ , written in 1993, makes absolutely no
mention of OO programming or any OO principles. However it has a very good
discussion of information hiding, and procedural code written in the various
styles that it recommends do not create a lot of global state. (And yes, I
have worked with such code.)

One of the best things about OO was that it pushed programmers who had not
absorbed best practices towards them. However, OO encourages combining the
principle of information hiding with OO notions of inheritance. Many learned
the hard way to prefer composition over inheritance. (Many, unfortunately,
never learned that. Just as many procedural programmers never learned about
information hiding.)

~~~
arc776
I feel like inheritance is the actual problem with OOP. Trees of types with
increasing specialisations just don't describe many real world problems that
well.

When you build programs like this, the code gets more and more rigid until it
becomes a maintenance nightmare, and it becomes extremely expensive to pivot
functionality. Sometimes a complex object hierarchy literally stops you from
building some new feature.

A lot of mental effort is sucked up in mashing problem spaces into
hierarchies, and once there the program is constrained by them.

~~~
zozbot234
It has little to do with subtyping per se and everything to do with the way
that implementation inheritance, specifically, breaks modularity. The whole
idea of
inheritance _contra_ composition is that every call to a virtual method--
including calls _internal_ to the class-- goes through a dispatch step so that
the _actual_ code that gets executed depends on whether the object is one of a
"derived" class, where that method might have been changed in ways that might
break any amount of expected invariants. This introduces brittleness both in
base class code (which generally has to call methods that might get overridden
in derived classes) and in the derived class itself (which has no way of
knowing what invariants might be expected of it as part of base class code).

Implementation inheritance is often justified as a way of "reusing" code, but
as it turns out, we've merely introduced undesired coupling instead of the
seamless reuse we might have expected.

~~~
temac
> where that method might have been changed in ways that might break any
> amount of expected invariants

It's not supposed to, and if it does, the design is clearly broken. Now the
(rhetorical) question is: does it happen often? The not-so-rhetorical
question is: is it possible to make it happen rarely? The even more
interesting question is: is it easy / more practical compared to the
alternative approaches, and if not, then what is the point?

My opinion on SOLID is that there is precisely one hard "principle", and it
is worthwhile: the LSP (it derives directly from logic, which is why). I
believe that Open-closed is even borderline insane, and that if there is a
crazy way to make it not insane, that way is probably applied by so few
people that it may well be irrelevant -- most people will try to apply it in
ways that quickly put them at risk of LSP violation, and LSP is far more
important (or they will try to apply it in the wrong places, but that is
another story). Plus, programs designed with complex hierarchies often miss
the point of execution contexts, and then understanding them is absolute
hell -- their original authors sometimes do not understand them themselves.
(The rest of SOLID are soft attempts at fixing self-inflicted wounds,
_sometimes_ even reasonable if you insist on doing Javaesque / old-C++-esque
OOO, but I digress.)

I'm not sure why anybody thought that kind of OOO was a good idea, or that
the main characteristic of interesting big programs was the usage of
"classes". I even find the suggestion of causation instead of mere
correlation dubious; there were already quite a number of big programs
before, and what permitted the explosion of program size was more the
ever-growing capability of computers, which arrived, in affordable versions,
during the Java-like OOO hype. Besides the simplistic reductionism (we can
start with: which classes model entities, which classes model values, which
classes are controllers, etc.), which is not too big a deal in practice,
modeling with class diagrams often misses the river in the middle of the
forest for some groups of intertwined trees.

~~~
kragen
What does "OOO" stand for? Usually it means "out-of-order [execution]" but
that doesn't make sense here.

The Smalltalk-80 container hierarchy demonstrates that you can get a lot of
mileage out of simple single-dispatch virtual methods with inheritance. The
Smalltalk-78 system, which you can try at
[https://lively-web.org/users/bert/Smalltalk-78.html](https://lively-web.org/users/bert/Smalltalk-78.html)
(although it's not working for me at the
moment), got a multiwindow GUI with an IDE running usably on an Intel 8086
with 256KiB of RAM, in only about 100 classes and 2000 methods totaling 200
KiB of code. This is not what I would describe as a "big program", but it is a
fairly impressive program nonetheless. Seeing that kind of thing is what led
people to adopt object-oriented programming.

~~~
xkriva11
alternative version:
[http://www.cdglabs.org/thinglab/](http://www.cdglabs.org/thinglab/)

------
svat
“OOP” is one of those terms (like most terms) that have a narrow meaning and a
broad meaning, and either proponents or detractors can use one of them in
arguments:

• OOP, narrow sense (emphasis on “object”): data-with-associated-functions,
encapsulation, etc.

• OOP, broad sense (emphasis on “oriented”): organizing your program around
objects that model the nouns in the problem domain, “everything is an object”.

A (satirical) illustration of the latter is in the first few paragraphs of
[https://caseymuratori.com/blog_0015](https://caseymuratori.com/blog_0015)
(from 2014), which gives an example where, to write a payroll system, you
first start by designing classes for “Employee”, “Manager”, etc.

A (non-satirical) illustration is in the infamous “TDD Sudoku” series of blog
posts (follow the links from
[http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-s...](http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html)
or
[https://news.ycombinator.com/item?id=3033446](https://news.ycombinator.com/item?id=3033446))
where (in contrast to Norvig's program which just solves the problem) the
TDD/OOP proponent ends up with a “class Game”, “class Grid”, “class Cell”,
“class CellGroup” (with derived classes “Row”, “Column”, and “Square”), but
ends up nowhere. (From Seibel's post: “…got fixated on the problem of how to
represent a Sudoku board. […] basically wandered around for the rest of his
five blog postings fiddling with the representation, making it more “object
oriented” and then fixing up the tests to work with the new representation and
so on until eventually, it seems, he just got bored and gave up, having made
only one minor stab at the problem”.)

I think we can agree that this sort of object _oriented_ programming can hurt
a lot (thinking about objects is not a substitute for solving your problems,
though it is tempting), while _objects_ themselves are useful.

~~~
jmchuster
> I'm sorry that I long ago coined the term "objects" for this topic because
> it gets many people to focus on the lesser idea.

> The big idea is "messaging" - that is what the kernal of Smalltalk/Squeak
> is all about (and it's something that was never quite completed in our Xerox
> PARC phase). The Japanese have a small word - ma - for "that which is in
> between" - perhaps the nearest English equivalent is "interstitial". The
> key in making great and growable systems is much more to design how its
> modules communicate rather than what their internal properties and behaviors
> should be.

[https://wiki.c2.com/?AlanKaysDefinitionOfObjectOriented](https://wiki.c2.com/?AlanKaysDefinitionOfObjectOriented)

~~~
svat
Well, regarding that quote (interesting no doubt), see some important caveats
at [https://www.hillelwayne.com/post/alan-kay/](https://www.hillelwayne.com/post/alan-kay/)
which goes into some detail
on evolving views about OOP (including that quote). It turns out that Alan Kay
did _not_ coin the term “objects”, though he did coin “object-oriented
programming”, but even in Smalltalk, messages were only one idea of three: the
post concludes that “OOP consisted of three major ideas: classes that defined
protocol and implementation, objects as instances of classes, and messages as
the means of communication.”

------
pinopinopino
I don't find this article well thought out. One, the writer presents OOP and
FP as opposing paradigms, while object-oriented programming is orthogonal to
functional programming. You can have a purely functional programming language
with objects.

Two:

    
    
        100% pure functional programing doesn’t work. Even 98% 
        pure functional programming doesn’t work
    

Doesn't work for what?

I think pure functional programming has its place; it's just not for
everything you want to do. But object-oriented programming is also not meant
for every domain. It is like saying scalpels are stupid because you can't
build a house with them.

~~~
jayd16
The objects would be fully public structs only though, yes?

>Doesn't work for what?

Doesn't work for performance, too much copying of immutable state.

If 100% pure means zero mutation, you can't even fit the definition of a
program and write output /s

~~~
ilikehurdles
Immutable data structures and algorithms don't copy immutable state nearly as
much as one might assume, and the vast majority of our field doesn't write
software that is affected by the tiny performance differences between the
underlying implementations of OOP Java versus functional Clojure, or perfectly
optimized OOP JS versus naive functional Elm.

~~~
jayd16
Sometimes it's fine and sometimes it's not, but that is why 100% pure isn't
feasible. I personally try to take functional ideas into my OOP language as
much as possible.

------
overgard
I think if we called it "binding functions to data", instead of "objects" we
would have a much different view of how important OOP is. There are certainly
times that binding functions to data can be useful, but I think if you tried
to tell someone "I'm making a language where all functions must be bound to
data" they would think you're crazy. (Even though that's basically what Java
is)

There are so many times you don't want to bind functions to the data they
operate on, and languages that force OOP always make it painful.

The other thing OOP gives you is type taxonomies, but frequently those are an
anti-feature. Almost every experienced developer I know thinks inheritance is
usually a bad idea.

~~~
cy_hauser
I'd change "binding functions to data" to "binding functions to types." The
"data" is an object of a particular type. The binding to the actual data
(object) can take place at compile time or run time, depending on need. IMO
that makes it a bit less weird and more just shifting the position of a
parameter.

~~~
overgard
Well, the reason I say binding to data instead of binding to type is that
runtime polymorphism on instances is one of the selling points of OOP. I
think it's a little overrated, but it is a good reason to use OO if you need
that functionality. If you're only attaching functions to a type, there isn't
much difference between "thing.fn(x)" and "fn(thing, x)".

------
implicit
The author seems to be implying that we have a hard choice ahead of us with no
middleground: We can either accept object oriented programming, or we can turn
to pure FP.

The OCaml community presents a pretty compelling third option:

OCaml is billed as a 'functional language,' but it doesn't do anything to
prevent you from performing mutations or executing side effects anywhere you
want. It even has builtin syntax for "for" and "while" loops.

Interestingly, OCaml does offer classes and objects, but hardly anyone seems
to use them. It's not that OCaml objects are weird or difficult or bad in some
way. People just choose to write records and functions. Some of those records
have functions in them. Some of the functions mutate state.

In the OCaml world, at least, pretending OOP never happened seems to have
worked out just fine.

~~~
autokad
> "The author seems to be implying that we have a hard choice ahead of us with
> no middleground: We can either accept object oriented programming, or we can
> turn to pure FP."

I disagree. I don't think that is what the author was saying at all, not even a
little bit: "100% pure functional programing doesn’t work. Even 98% pure
functional programming doesn’t work. But if the slider between functional
purity and 1980s BASIC-style imperative messiness is kicked down a few notches
— say to 85% — then it really does work. ... It’s possible, and a good idea,
to develop large parts of a system in purely functional code. But someone has
to write the messy parts that interact with the outside world."

~~~
implicit
The problem is the way the author instantly jumps from "98% pure functional
programming doesn't work" to "you should use OOP."

In order to bridge the gap, you have to make some pretty terrible assumptions:

If your code is not OO, it must be FP.

If your code is FP, it must be pure. Therefore,

If your code cannot be pure, it must be OO.

------
FeepingCreature
> 100% pure functional programing doesn’t work. Even 98% pure functional
> programming doesn’t work. But if the slider between functional purity and
> 1980s BASIC-style imperative messiness is kicked down a few notches — say to
> 85% — then it really does work. You get all the advantages of functional
> programming, but without the extreme mental effort and unmaintainability
> that increases as you get closer and closer to perfectly pure.

This is how we (try to) write code where I work. Pure algorithms, immutable
data types, single point of change. D is pretty good for this, but could be
better; ranges allow you to be pretty expressive when composing operations,
but immutable types have a lot of problems still.

It actually combines well with OOP. We tend to mostly use classes to group
methods and express domain dependency, rather than manage a tiny state subset.
So in maybe half to two thirds of our classes, all the fields are set once on
startup and then never changed again.

------
leephillips
“Object oriented programming, for all its later excesses, was a big step
forward in software engineering. It made it possible to develop much larger
programs than before, maybe 10x larger[...]OOP made it possible to write
programs that could not have been written before”

Teams were writing million-lines-of-code Fortran programs in the 1980s to
simulate the atmosphere, nuclear weapons, etc. Somehow I think these remarks
need some qualification.

~~~
danielscrubs
Yeah, the author seems to rewrite history quite heavily. There are even
arguments here that the Linux kernel is OOP, in which case we need a clear
definition of what OOP is, because right now this HN discussion is quite
"fluffy".

------
DonaldFisk
> That has been my experience. I hardly ever write classes anymore; I write
> functions. But I don’t write functions quite the way I did before I spent
> years writing classes.

> And while I don’t often write classes, I do often use classes that come from
> libraries. Sometimes these objects seem like they’d be better off as bare
> functions, but I imagine the same libraries would be harder to use if no
> functions were wrapped in objects.

I noticed this soon after I got involved with Java. Programmers were using
classes where I would use methods. They would have objects which did things
where I would have passive objects which have things done to them by methods.
A specific example might be a Parser class with a parse method, where I would
define an entirely passive Grammar object with a parse method, and of course a
Tokenizer object where I would use a tokenize method, calling an
InputStreamReader class (instead of just InputStream) with a read method. I
concluded that any class name ending in "er" was almost invariably unnecessary
overhead.

I wondered where this came from. All I could think was in object orientation's
roots in simulation, where it's entirely appropriate to have objects which do
things, and it was copied from there.

~~~
MH15
This is why I am currently so happy with ES6+/TypeScript. The ability to
define strongly typed classes but also have free-standing functions really
improves the workflow for me. Classes should rarely be used for things that
are unique, especially a "parse" method or the like. I recently had to make
two modules: one to format a message and one to parse the message. Instead of
classes, I can just use functions, outside the scope of any class objects.

~~~
cardiffspaceman
Sometimes the best function for the job is a free-standing function, e.g.
parse(Grammar, Input). I was leaning this way when I read some quotes from
Bjarne Stroustrup which touched on the subject. Here is a link to his personal
FAQ which sums it up.

[http://www.stroustrup.com/bs_faq.html#oop](http://www.stroustrup.com/bs_faq.html#oop)

------
MAGZine
This is just functional core, imperative shell. Which is to say, keep generic,
immutable things generic and immutable. Build libraries to implement ideas and
concepts in your domain. And then when it comes time to implement the high-
level business logic, use an imperative shell to harness the power of all of
those concepts you created. Keep it logical, straightforward, easy to follow,
and well organized.

Like the author says, OOP is an organizational tool more than anything, and
pattern names like "strategy," "adapter," "command," and "builder" signal
specific things to your colleagues that IMPROVE understanding and
time-to-grok.

~~~
memco
Thanks for identifying the paradigm.

I've been working on refactoring some code, and I really like the functional
style so originally wrote most of the logic in discrete functions. Multiple
functions rely on some similar dependencies so there are some unique
parameters for the function and then a handful of parameters that are the same
for all functions (stuff like a database connection). It seems good that each
of these functions can accept those dependencies in the signature, but it's
likely that each of those functions is going to need that same dependency for
the whole pipeline. At some point it looks to me like a small object that can
handle the initial setup and management of those dependencies is helpful.
Functional core, imperative shell seems kind of like what I'm doing? Others
mentioned there is some merit to being able to reduce state from global to
local in some form, and OOP may be useful for that case. Recommendations for
other ways to handle these situations would be helpful, as it's possible I'm
only reaching for an OOP wrapper because that's what I know.

------
dpc_pw
It would have been better if OOP actually never happened. OOP is just
reaaaaly bad: one of those ideas that looks appealing but doesn't work in
practice.

Inheritance is often the scapegoat, but it's not even that big of a problem
with OOP. The core ideas behind OOP are misguided:

    
    
        Abstraction
        Encapsulation
        Inheritance
        Polymorphism
    
    

Encapsulation at such a fine-grained level as each object/class is like a
person putting padlocks on each pocket, so that the right hand can't grab
things from the left pocket. The granularity of encapsulation in OOP is just
sooo impractical. The right granularity for data-hiding is data-stores,
layers, APIs, (micro-)services, modules, not each single little bit of data.

Abstractions are costly. They can't be a goal in themselves. Abstractions
should be applied only where they are needed and beneficial, where the
benefit of adding them outweighs the cost. Similarly with inheritance and
polymorphism: they are abstractions that have a cost. Sometimes worth it,
sometimes not.

The object itself is a bad idea. Passing around references to objects (data
with attached behavior, instead of POD, plain-old-data) forces you to scatter
your data across many tiny bits, which makes everything way more complex and
slow (poor cache locality, layers and layers of indirection). Instead of
passing an id or a bunch of fields, you're now passing data plus some
abstractions to manipulate it everywhere. You generally get a graph, and
graphs are the most general and thus most difficult-to-use data structure out
there. Coordination becomes a nightmare really quickly.

OOP is just busy-work: it pretends that you can play with abstractions and
taxonomies and ignore the fact that your software is supposed to manipulate
data to give an expected result, not be a little god-game of modeling the
world. It ignores that there are ways to structure your data that support
well what you're trying to do, and a graph of abstract objects is very rarely
the best choice.

And so on...
[https://dpc.pw/the-faster-you-unlearn-oop-the-better-for-you...](https://dpc.pw/the-faster-you-unlearn-oop-the-better-for-you-and-your-software)

------
evdev
It seems like there's something deficient in the way we tell the story of the
history of software architecture (to the extent we tell a story at all) in
terms of the name-brand techniques and technologies involved, rather than in
the actual layout and organization of actual codebases.

For a while I've assumed that OOP as in C++/Java essentially formalized
modular programming in C. In other words, that people were already writing
programs whose state was divided into functional areas, with some functions
serving as the interfaces between the modules. With a class-based system you
can rigidly formalize this; and then OOP as we use the term essentially just
reinterprets this formalization as actually creating the architectural
paradigm that had already evolved as programs grew.

(This is NOT meant as the one way to sum up the whole world of things
identified as or related to "object-oriented programming".)

But I wasn't around at the time...

~~~
jschwartzi
This is my thinking too. It's really silly to have wars around programming
paradigms. There are only a few principles around which we're all arguing:

* How do we make programs that are easy for the machine to execute
efficiently?

* How do we make programs that are easy for humans to read and understand?

* How do we ensure, given the maintenance requirements of our programs, that
another human who doesn't have the benefit of our experience can safely make
changes to our programs without unintended consequences?

Discussions around OO versus Functional versus Procedural miss the point. You
can write perfectly maintainable procedural, functional, or object-oriented
code. If you're authoring something brand new you have to approach it with a
complete understanding of all the moving parts. If you're not there, make a
prototype, wait a few days, then go through and re-read it. Anything you don't
understand is something nobody else will the first time they approach your
code base. Come up with ways to be explicit and to communicate clearly what
the intent is. Try to anticipate what things people will be changing often and
make those easy things to change. Remember that it's about conveying a
representation, not a deep understanding. You want to represent your
understanding of the problem space to someone who doesn't have the same level
of understanding as you.

~~~
commandlinefan
> It's really silly to have wars around programming paradigms. There are only
> a few principles

Well, you say we're having wars around the programming paradigms, I say we're
having "spirited debate" around the principles :). I've been working mostly in
Java for the past 20 years or so, and I can't help but observe that most
people, when they try to put together a Java application, default to a sort of
design that looks an awful lot like old Cobol programs did: they have a
"datatype" generator (usually automated from XML schemas) and a slew of
"utility" classes with mostly static functions that have mostly static data
that operate on these datatypes, and as little class instantiation as they can
possibly get away with. I've seen this same basic architecture repeated many
times across four different employers in two decades. It's always a lurching,
monolithic, untestable behemoth that never works reliably and resists any
attempt to change. In talking with the original designers, it's clear that
there were no principles behind the design besides "it still doesn't work, how
do I get this thing to work". If there were clear, adhered-to principles
like automated testability, you'd naturally end up with an OO (or, even
better, FP) style of design.

~~~
mypalmike
Interesting. I guess I've been more fortunate. Most of the Java code I've
worked with has involved reasonably well thought out classes. For me that
mostly means I can read and understand parts of the codebase in isolation.
There are usually a few piles of sometimes ugly utility classes and the
occasional mess of deeply nested inheritance that nobody wants to touch. When
the latter becomes painful enough, someone usually decides to refactor it,
which is often not as hard to do as everyone fears.

It seems to be improving in the last 5-10 years, as most practitioners have
found that both of these eyesores can be reduced. DI (sometimes messy itself,
but it can be done cleanly) tends to make people rethink those utility
classes, and shallow inheritance is now favored, with a focus more on
interfaces and composition.

------
adamnemecek
I've been recently writing a lot of code in the ECS style
[https://en.wikipedia.org/wiki/Entity_component_system](https://en.wikipedia.org/wiki/Entity_component_system)
and I'm a fan. It allows me to concentrate on shit without getting distracted.
Code tends to be clustered together with other related things as opposed to
having to jump between files and classes and trying to figure out what's
getting called.

The only downside is that it's not as popular in the mainstream, and there
aren't really any ECS-first languages.

However, I believe there is a reason ECS has been dominating the game
industry for the last 20 years. It's very flexible, fast, and works nicely
with GPUs.

~~~
rectang
My impression of ECS from a distance is that it is antithetical to data-
hiding.

That there are good reasons in terms of memory layout to locate all of those
values which would have been member variables in large arrays so that external
functions can iterate over them efficiently, but that in so doing, private
members become impossible.

What am I missing?

~~~
neutronicus
You aren't missing anything. But you also aren't asking yourself the meta-
question "what is the point of data hiding?"

The answer to that meta-question is "to preserve assumed relationships between
data." `std::vector` stores a capacity, and that capacity damn well better
correspond to the length of the last array it allocated or you will get a
segmentation fault. So you hide it.

The OO thing that ECS is pushing back against is a tendency to group things
together solely because they are in one-to-one correspondence, when they are
otherwise unrelated. In a game, there is no constraint that needs to be satisfied
between a player's HP and their position on the map. So why "hide" this data
together in an aggregation?

If you want to permit only a few functions to operate on your big array 'o
stuff, you can always do something like

    
    
        class sensitive {
        public:
          // explicit return type added; only this function may touch the data
          static void allowed_operation(sensitive& s, context& c);
        private:
          double here[N];
          int be;
          bool dragons;
        };
    

Similarly, if you need to do some really complicated computation on everything
about a player, you can define Objects as aggregations of indices into the
arrays 'o stuff (where typically people think of an Object as a big-ass struct
of structs, often implicitly due to Inheritance), where the data you care
about hiding is not the stuff from the array, but the indices:

    
    
        class Player {
        public:
          Component& GetComponent() { return arrComponent[index]; }
        private:
          size_t index;  // the hidden data is the index, not the component
        };

~~~
rectang
Thanks for the examples shoehorning data hiding into ECS! They are revealing
even if only appropriate in esoteric circumstances.

> _But you also aren 't asking yourself the meta-question "what is the point
> of data hiding?"_

My answer to that question has always been that data hiding is necessary for
the sake of modular independence in large systems. This is a principle which
applies across all of engineering, not just software — see the "starter motor"
example elsethread.

Implementation details need to stay hidden so that you need only concern
yourself with local effects when making changes — instead of needing to keep
the entire system in your head because any change might impact any tiny detail
anywhere at all.

Nevertheless, I agree that in some circumstances it makes sense to expose the
data structure as an API, and that ECS offers a compelling approach and set of
conventions as to how you would go about that.

~~~
dpc_pw
> My answer to that question has always been that data hiding is necessary for
> the sake of modular independence in large systems.

Yes. But not at the granularity of every object. Just like you don't put a
padlock on every pocket to protect your left hand from grabbing stuff from
the right pocket.

Hiding data has a real cost, just like inventing abstractions and interfaces
for every tiny thing. That's the core reason why OOP software is so bloated
and always feels so "heavy".

The right granularity is much coarser: modules, API layers, data-stores.
Much closer to the service in "micro-service" than to "object".

------
beeforpork
Interesting stuff, but I really miss explanations:

> It (OOP) made it possible to develop much larger programs than before, maybe
> 10x larger.

Why?

This contradicts my intuition, because the biggest problem is that OOP
focuses on state changes (of objects). State change means complexity:
formally, you need to understand the set of all possible states to reason
that the program is correct, and that is an exponential problem in the
number of pieces of state, unfortunately. So how would this enable larger
programs? What's the reasoning here? It would mean some even more potent
mechanism of OOP makes more state manageable. The structuring of data does
not strike me as that potent -- nice, yes, but, well -- please explain!

> OOP provides a way for programmers to organize their code.

I think it is primarily about organizing data. The organizing of the
corresponding code follows. When Java emerged, it did not even have standalone
functions -- every piece of code had to be attached to data. In Smalltalk, you
could change the definition of the global 'true' or even extend bool to have
three values -- definitely data-centric, and introducing unmanageable state...

> 100% pure functional programing doesn’t work. Even 98% pure functional
> programming doesn’t work.

I believe this. Well, intuitively, I'd say no paradigm works well when applied
100% pure. It would still be nice to see examinations of this or reasoning or
proof or at least examples.

~~~
Nursie
> the biggest problem is that OOP focuses on state changes (of objects). State
> change means complexity, so formally, you need to understand the set of all
> possible states to reason that the program is correct.

This is just not true. In fact that's the whole point - you encapsulate state.
It becomes easier to reason about.

When I write a webserver, do I need to know or care about the internal state
of the classes dealing with the TLS protocol? Or do I encapsulate consistency
and correct behaviour at that level and then use the interface elsewhere?

I don't need to know everything at all layers.
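A toy illustration of that layering (all names hypothetical): the caller uses the interface and never inspects or manages the handshake state.

```python
class SecureChannel:
    """Hypothetical wrapper: callers use send() and never see handshake state."""

    def __init__(self):
        self._handshake_done = False  # internal state, invisible to callers

    def _ensure_handshake(self):
        if not self._handshake_done:
            # ... the real protocol work would happen here ...
            self._handshake_done = True

    def send(self, data):
        self._ensure_handshake()  # the invariant is maintained internally
        return f"encrypted({data})"

ch = SecureChannel()
print(ch.send("hello"))  # the caller never touched _handshake_done
```

Correctness at the protocol layer is enforced once, inside the class, instead of at every call site.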

------
hota_mazi
> I hardly ever write classes anymore; I write functions.

I've never understood why so many people insist on this false dichotomy.

I do both.

These past years, I feel that I've been writing much better functions as my
understanding of more advanced concepts has ramped up, thanks to functional
programming and the emergence of things like Rx and coroutines, but once I
have these functions, I am extremely happy to be able to organize them in
classes and modules and leverage various forms of polymorphism to make my code
base flexible and easy to maintain.

~~~
choward
> I've never understood why so many people insist on this false dichotomy.

> I do both.

Are you talking about methods attached to your classes or functions that are
at the root/module level? If it's on a class I would argue it's not a function,
since it takes input that's not just its parameters. If it
doesn't access properties of the class, then why does it need to be attached
to the class?

For example, let's say there is a function that takes instances of two
different classes and returns an instance of another class. If you want to
move that function to a class, where does it go?

> once I have these functions, I am extremely happy to be able to organize
> them in classes and modules

I understand the organizing into modules and being able to create
interfaces/types, but why classes?
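To make the dilemma concrete, a sketch with invented types: the free function reads both inputs equally, so attaching it to either class would be an arbitrary choice.

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: float

@dataclass
class Rate:
    percent: float

@dataclass
class Interest:
    amount: float

# A module-level function: it depends on both inputs equally, so putting it
# on Account or on Rate would be an arbitrary choice.
def accrue(account: Account, rate: Rate) -> Interest:
    return Interest(account.balance * rate.percent / 100)

print(accrue(Account(1000.0), Rate(5.0)).amount)  # 50.0
```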

------
tgflynn
> Object oriented programming, for all its later excesses, was a big step
> forward in software engineering. It made it possible to develop much larger
> programs than before, maybe 10x larger.

If that statement is true then OOP was surely the most important advance in
software development since the original higher-level languages, such as
Fortran, were developed.

My experience agrees with the author's stance that OOP provides significant
leverage in developing complex software. What I don't understand is why he
then falls into the current fad of seeking to abandon it.

Surely if OOP has been as successful as the author claims then what's needed
is not to replace it with another paradigm that rejects all the core
principles of OOP but rather an approach that combines the good aspects of OOP
with other paradigms allowing the programmer to use the tools best adapted to
the problem at hand.

~~~
Aloha
No one ever seems to ask if an ever-larger program is a good idea. The fact
that the tools enable it, and that it's the way that's currently taught, still
does not answer the question of 'goodness'.

------
tylerjwilk00
I don't understand the modern hate for OOP. Perhaps a bad mentor or
inheritance nazi. Inheritance can be thrown in the trash but otherwise OOP is
just a collection of related functions where you can have shared data to work
with while not polluting global scope. I don't see how it's so oppressive
unless you're trapped in Java land, where OOP is inescapable, as opposed to
dynamic languages where you can sprinkle in your OOP with other coding styles
as needed.

~~~
logicchains
A relevant quote from Joe Armstrong, creator of Erlang:

"the problem with object-oriented languages is they’ve got all this implicit
environment that they carry around with them. You wanted a banana but what you
got was a gorilla holding the banana and the entire jungle."

In a nice procedural or functional program, when I need to do something I just
write a function that takes the relevant arguments. From the function
signature, I can reason about the function, that it only depends on and
interacts with those things I passed in. It also means I can reuse that
function anywhere else in the code I happen to have some of those things that
are the function args.

In the OO approach, even if a function only needs to access a few member
variables, I still pass in the whole object every time I declare a member
function. This makes the function harder to reason about, as the body of the
function could touch any member of the class, and if it has other classes as
members then it could rely on any of them too. It also makes the function much
harder to use, as even if it only needs X, Y and Z, calling it requires
creating a whole instance of that class of which the function is a member,
which may require a bunch of other stuff.

In that sense, the OO approach introduces unnecessary coupling, between the
logic in the body of a member function and the class members not used in that
function. Because I can't use that function's logic without creating instances
of the members not used in that function.
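The coupling described above can be sketched like this (class and fields invented for illustration):

```python
class Order:
    def __init__(self, price, qty, customer, warehouse, history):
        self.price = price
        self.qty = qty
        self.customer = customer    # unused by subtotal, but required
        self.warehouse = warehouse  # just to construct an Order at all
        self.history = history

    def subtotal(self):  # only touches price and qty...
        return self.price * self.qty

# ...yet calling it demands a whole Order:
print(Order(10.0, 3, None, None, None).subtotal())  # 30.0

# The free-function version states its real dependencies in its signature
# and is reusable wherever a price and a quantity exist:
def subtotal(price, qty):
    return price * qty

print(subtotal(10.0, 3))  # 30.0
```

Reading the free function's signature tells you everything it can touch; reading the method's signature tells you only that it can touch anything in the class.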

------
Animats
There's nothing wrong with objects. Hierarchies of objects, though, ran into
the "is-a" problem. Multiple inheritance has even worse problems, especially
around order of initialization.

Now that people are mostly over inheritance for the sake of inheritance, it's
not so bad. Often, you want to have one class be a member of another, rather
than a child of it.

This brings up a common problem. If A is a member of B, B sometimes wants a
back-reference to A. This is hard to set up in some languages. It should be
made easy, because it removes much of the need for inheritance. For Rust,
especially, it's a special case of Rust's back-pointer problem, the one that
makes doubly linked lists and trees with backpointers hard. Not having this
encourages the creation of structures with too many data elements.

------
asplake
“Pretending X never happened” might make an interesting series: things that we
no longer adhere to (visibly, at least) but that changed us in ways we can
still appreciate.

~~~
qubex
A natural question that arises in this context is: “do those who join the
field _after the consensus to forget X ever happened_ behave any differently
than they would have _if X really hadn’t happened_?”

------
inopinatus
The author seems to equate OOP with “writing classes”. So I have to suggest
that OOP never happened for them in the first place, at least not in the sense
I understand it, viz. messaging, encapsulation, and late binding. The design
discipline that falls out of this is, for me, domain modelling, not “writing
classes”, and the value I’ve obtained is thereby a better fit between the
structure of the code and its purpose, with all the opportunities for
validation and comprehension by domain experts that follow.

That said, I do agree with the sentiment that multi-paradigm languages offer
higher developer productivity than purist languages. I’d still have procedural
style in fourth place, behind OO, FP, and Logic.

------
S_A_P
Maybe its because I write small software or customize existing mature
Enterprise apps. I just dont care about the dogma so much. I think there is a
lot to like about OOP. There is a lot to like about functional code. I use
both quite a bit with C#. Dont get me wrong though, Im glad people do think
about this and try to make software paradigms better. I just dont find that I
am overly hampered with current tool sets to worry about this stuff.

Most importantly, I think this is just all talk. This is John D. Cook just
spitballing with a contemporary and debating things. I bet the other person in
the conversation is probably not all that convicted of his argument and is
throwing an exasperated take out there...

------
pava
I was hoping this post would be about what the world (and our programs) would
look like if OOP never happened. But an interesting and quick read
nonetheless.

------
juped
OOP never had one definition anyway, besides maybe "your language has a
'class' keyword" (although some don't!).

~~~
bussierem
and Haskell has a "class" keyword, so I guess there goes that option too

------
leephillips
Reading some of the comments here reminds me of an extremely popular language
that was designed from the start to encourage the use of a kind of class
inheritance. People built complex things with this language, using deep layers
of nested inheritance, as was intended. Then they began to discover that, when
your project reached a certain level of complexity, it was excruciating to try
to figure out what was going on, and impossible to change anything without
breaking eight other things far away. New, powerful features are added to this
language every year, and are embraced by developers, but they have learned to
tame it by avoiding its inheritance abilities, or using them very sparingly.
They have even developed novel ways to use its power while largely avoiding
inheritance—systems that have their own conventions and terminologies invented
by end developers and not envisioned by the language designers.

I’m talking about CSS, where class inheritance is called the “cascade.”

------
29athrowaway
To make my point, let's go back to the very beginning of programming:
plugboards.

To create a program you used cables on a plugboard. You had a general-purpose
computer that could run multiple programs, but in order to switch programs you
had to spend significant time setting up cables. A small change in the program
could require the same effort.

So then we moved to stored program computers, so that we don't have to do that
anymore, and the global consensus is that it was a good idea, because it saves
time.

Then, we had programs in bare machine language. But that poses various
challenges:

1) Having to keep track of the state of memory and registers is difficult. So
programming languages were created to provide an abstraction over them in the
form of variables.

2) Programs were full of jumps that became hard to track and maintain at
scale. Structured programming provided an abstraction over jumps in the form
of control structures (sequence/selection/iteration/recursion). Procedural
programming was created to group statements into reusable procedures and
functions.

3) Then, having variables around became hard to maintain as well. So then
structures were created as a way to group variables that are used together.

But then, people understood that some procedures and functions are coupled
with structures, and that some structures are supersets of other structures.
And that's how OOP was born.

In this mindset, I do not think OOP is a bad idea. I also do not think that
the natural consequence of OOP is bad software. The problem is how the
paradigm is used, not the paradigm itself.

The problem people are facing now is shared mutable state, which is not only a
maintainability problem, but is also problematic in multithreaded software.
Functional programming is a viable solution to address that problem.

NOTE: edited based on suggestion, since apparently I got some concepts mixed
up.

~~~
Jtsummers
Just a note, structured programming wasn't about structures in the data sense,
but in the logic sense. Organizing program _logic_ in a way that was more
structured. Using a pseudocode:

    
    
      function unstructured_summation (int lower, int upper):
        int sum := 0;
        label loop:
        sum := sum + lower;
        lower := lower + 1;
        branch_neq lower, upper, loop;
        return sum; // this is assuming there's even a return concept
    

This is short and simple, so the goto or branch is fine here to reason about.
But it doesn't scale very well [0]. Structured programming discouraged these
("discouraged" being a scale from "don't use unless it makes sense" to "don't
use at all") in favor of other constructs that were clearer in their intent
and easier to reason about (particularly in the days when you couldn't write a
piece of code and test it in 10 seconds or less).
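Rendered in Python for contrast, the structured version replaces the label and conditional branch with a loop construct whose intent is immediate (same semantics as the branch-based routine: it sums lower through upper - 1):

```python
def structured_summation(lower, upper):
    # Equivalent to the label/branch version: accumulate lower..upper-1.
    total = 0
    for i in range(lower, upper):
        total += i
    return total

print(structured_summation(2, 5))  # 9, i.e. 2 + 3 + 4
```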

[0] I once inherited a Fortran codebase that was essentially written in an
unstructured style, many gotos and computed gotos. The guy who wrote it (yes,
singular) over a 20 year period knew exactly what it did. But that didn't help
me, because he was 70 and retired with a nice pension. The whole thing had to
be rewritten using the original as a reference for testing because it was
unmaintainable.

------
ken
This is all awfully vague. Accordingly, all the comments are just talking past
each other.

What does "85% functional" mean? How exactly do you measure that?

For that matter, what does "OOP" mean? PG published a note from Jonathan Rees
[1] which lists 9 possible (pieces of) definitions.

"Because OO is a moving target, OO zealots will choose some subset of this
menu by whim and then use it to try to convince you that you are a loser."

That's roughly what I'm seeing here, yes.

[1]:
[http://www.paulgraham.com/reesoo.html](http://www.paulgraham.com/reesoo.html)

------
luord
Funnily, ever since I decided to follow the SOLID principles and strive for
clean architectures in general, I've been writing _more_ classes. Not that I
can in my current job, it being Go, but I make do.

Why can't people let paradigms be paradigms instead of trying to turn them
into religions? They're approaches that might or might not be well suited to
different problems; yet so many people try to make the problem fit the
paradigm they prefer instead of the other way around.

This article is thankfully more nuanced than most on this subject, though.

------
cafard
Before I ever dabbled in what was called "object-oriented" languages, I was
happy enough to

#include <stdio.h> ... FILE *myfile;

Isn't myfile really an object?

------
ChrisMarshallNY
I've actually run into a couple of people with six-digit SO scores that
profess to not know about such concepts as polymorphism.

My jaw dropped. Maybe they were being a smartass, but that's just crazy talk.

Like all dogmatic stances, people just take a "My Way or the Wrong Way"
approach.

Most of my engineering is a hybrid of classic ( _read: "old"_) techniques, and
new, "cutting-edge" techniques.

I write about my outlook on that here:
[https://medium.com/chrismarshallny/concrete-galoshes-a5798a55af2a](https://medium.com/chrismarshallny/concrete-galoshes-a5798a55af2a)

(Scroll down to "It's Not An 'Either/Or' Choice").

One day, AIs might replace us poor coding schlubs. At that point, we can
assume that everything will revert to Machine Code.

Until then, OO is a great way for humans to grok the complexity of software
development.

I will always keep referring to this classic joke when I encounter inflexible,
dogmatic thinking:
[http://www.solipsys.co.uk/new/TheParableOfTheToaster.html](http://www.solipsys.co.uk/new/TheParableOfTheToaster.html)

~~~
barrkel
I'd say all programmers of any significant experience know the concept of
polymorphism, but it's quite possible to know the concept without knowing the
word.

They might be using discriminated unions, or type-distinguishing enums on rows
or structs with a superset of attributes, but the concept is hard to avoid
when trying to generalize behaviour over heterogeneous data.
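That tag-dispatch pattern might look like this sketch (shapes invented for illustration): polymorphism in substance, with no class hierarchy in sight.

```python
import math

# A "type-distinguishing enum on structs": each record carries its own tag.
shapes = [
    {"kind": "circle", "r": 2.0},
    {"kind": "rect", "w": 3.0, "h": 4.0},
]

def area(shape):
    # Behaviour generalized over heterogeneous data by branching on the tag.
    if shape["kind"] == "circle":
        return math.pi * shape["r"] ** 2
    if shape["kind"] == "rect":
        return shape["w"] * shape["h"]
    raise ValueError(shape["kind"])

print([round(area(s), 2) for s in shapes])  # [12.57, 12.0]
```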

~~~
pinopinopino
Good point, you see this so often in C code bases. Didn't even think about
that.

------
gnufx
One take, from Robin Popplestone, long ago in comp.lang.functional: “… it does
seem to me that … OOP represents the discovery by the mainstream community
that it is a good idea to associate code with data, but, since they still
don't know how to do closures, they have bodged it.”
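The closure/object duality Popplestone alludes to is easy to demonstrate in a few lines: a closure already associates code with data, no class required.

```python
def make_counter(start=0):
    count = start  # the data, captured by the closure

    def increment():
        nonlocal count
        count += 1
        return count

    # the "object" is just the function together with its environment
    return increment

counter = make_counter(10)
counter()
print(counter())  # 12
```

The captured `count` is even better hidden than a private field: there is no name through which a caller can reach it at all.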

------
m463
I don't know much, but I think:

modularity is VERY good

(I think most of the shortcomings of C are a lack of modularity)

functional programming is good (eliminate side effects and people and
compilers can assume without making an a** of u and me. Should not be
required)

classes are good

(frequently it's the same, but slightly different)

multiple inheritance is bad

(there are other ways to do it)

------
irrational
> I hardly ever write classes anymore; I write functions.

This speaks to me so much.

------
mrkeen
I wish the author defined the term OOP somewhat. It really means different
things to different people.

FWIW I like Bob Martin's definition (encapsulation, inheritance,
polymorphism).

------
lazulicurio
This article and the associated comments reminded me of 'Object-Oriented
Programming and Essential State'[1] which was posted on here about half a year
ago. I'm just going to copy from a thread that I felt got to the crux of the
matter with OOP.

\----

One thing I hate though, in a language like Java, is when I see "utility"
classes with static methods which take objects as parameters, then perform
some calculation based entirely on the state of the passed in object. In my
opinion, if the object can reason about its own state and return an answer
based on that reasoning, that method/logic should be in the object, not
elsewhere.

Another one that bothers me is these transformation static methods which take
type A and return type B based on nothing but the state of type A. The
languages we're talking about, C#/Java, already provide a facility for this
called a constructor.

If the development approach is going to completely remove operational methods
from data types then I'd think long and hard about using a language which
supports this instead of a language, like Clojure, which does not.

\----

> that method/logic should be in the object, not elsewhere

Only if it's involved in preserving some object invariants, or accessing parts
of its state that are abstracted away in the public interface. Otherwise,
you're breaking encapsulation by putting some logic in the object that doesn't
belong there, and making it hard to change the implementation later.

> The languages we're talking about, C#/Java, already provide a facility for
> this called a constructor.

There are sensible reasons to avoid using constructors for this, at least in
the general case.

\----

> Only if it's involved in preserving some object invariants, or accessing
> parts of its state that are abstracted away in the public interface.
> Otherwise, you're breaking encapsulation by putting some logic in the object
> that doesn't belong there, and making it hard to change the implementation
> later.

I've been pondering this thread all day, and I think this is the crux of the
issue. Ideally, classes would only encapsulate state and you'd use
namespaces/modules to encapsulate functionality. But most Java/C# OOP examples
use classes to encapsulate both state and functionality, which gets you stuck
in the morass that the article discusses.

\----

[1]
[https://news.ycombinator.com/item?id=21238802](https://news.ycombinator.com/item?id=21238802)
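That separation (state in a small class, behaviour in module-level functions) can be sketched in a few lines (names hypothetical):

```python
from dataclasses import dataclass

# The class encapsulates only state...
@dataclass
class Invoice:
    subtotal: float
    tax_rate: float

# ...while the module namespace holds the functionality that uses it.
def total(inv: Invoice) -> float:
    return inv.subtotal * (1 + inv.tax_rate)

def is_taxable(inv: Invoice) -> bool:
    return inv.tax_rate > 0

print(total(Invoice(100.0, 0.25)))  # 125.0
```

New operations are added by adding functions to the module, without ever reopening the data type.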

------
pizlonator
85% FP + 15% BASIC = JavaScript

~~~
schwartzworld
haha. JavaScript bad

------
headlamp
libc has stood for quite a while without OOP. I think a mix of FP and
imperative is magnitudes easier to reason about than OOP. IMO, FP and
imperative semantics are much clearer than OOP.

------
snidane
OOP in the sense of encapsulation, inheritance, polymorphism is an academic
exercise for students to think about organization of data and functions into
hierarchies. Objects in this case were mistaken for modules. This is only
usable by students and librarians.

OOP in the sense of message passing is just a programming language
implementation technique used in Smalltalk which lets you handle functions of
multiple arguments with ease. You basically 'pass' each argument as a message
to the receiving object.

That is obviously just a hack to make nontrivial function calls work. Real
message passing obviously has to be asynchronous to even resemble the concept
of real-life message passing. Erlang and the Actor model in that case are pure
models of message-passing-based OOP.

Somehow all of these OOP systems are just distractions from solving problems
using computers because you can represent your problem using pure 'data' such
as numbers or strings and composing them to bigger structures with well
defined operations - such as lists, dicts and tables and so on. Objects can be
purely abstract thing in your mind or documented in code comments.

Imperative programming obviously doesn't scale beyond one machine because of
von Neumann bottleneck and suffers from unmanageable global state. That
doesn't mean OOP is the solution.

Functional programming is of two kinds - statically typed with powerful type
systems - which are systems with very limited field of use and high
development cost. Unfortunately unusable for real life problems because of
lack of intuition.

Other less strict functional languages fare better, but used naively they
suffer from the von Neumann bottleneck too. Unless they are used in a
language-oriented programming paradigm, whereby you use your language only to
construct a higher-level language (a DSL) which can abstract over more than
just a single machine.

Good examples are Lisps and their ability to define other languages such as
Prolog or SQL or Linq.

The only way to make parallel computing possible is to use data parallelism
instead of task parallelism, so data-oriented languages such as SQL or Datalog
are the future now that Moore's law has reached its end. This poses a strong
constraint on the design of programming languages.

SQL, while not suffering from the single-processor bottleneck, unfortunately
sucks as a language because it is not programmable and is nondeterministic in
generating physical plans.

Which offers a great opportunity for programming language enthusiasts - build
a programmable sql or datalog like language because other paradigms reached
some fundamental constraints and hit a dead end.

Or come up with a new paradigm.

------
foobar_
Program = Data Structures + Algorithms

Most OO code is poorly designed, slow, error prone and a memory hog. Every
object you initialise wastes so much memory. OOP is not going to be there
forever. It is a failed paradigm. Hopefully it will be replaced with something
more efficient, faster to develop and something with more provable
correctness.

Is an object a data structure, an actor, a module, a knowledge frame ?

A state machine can rightfully be called state + behavior. Sadly no
programming language has made hierarchical state machines a first class
feature to support with syntax despite it being software engineering unlike
what OO bros claim.

~~~
Nursie
For a failed paradigm, there sure is a lot of perfectly successful software
written using it.

~~~
foobar_
Successful as in slow to use, poorly designed, difficult to extend and a
memory hog. You can't even reuse code between Rails 1 and Rails 2. OO is a
joke. Pure OO languages like Java and Smalltalk are already failures. C++, Scala,
Kotlin are not OO languages. Design patterns are not engineering, let alone
architecture.

The only success for OO has been UI development. It's a failure everywhere
else. OO databases are a failure. ORM is a failure. OO based distributed
computing has failed, we use REST. OO based design is a failure, no one uses
UML.

OO based architects are a waste of time and money. You can replace all OO
based architecture nonsense and patterns with code written in Go.

~~~
Nursie
Java is not pure OO, neither is it objectively any sort of failure, being one
of the most widely used languages out there.

As such I don't think the rest of your post is worth consideration.

~~~
foobar_
Java Web Start is such a big success.

------
saagarjha
Avoiding OOP is hard: even those using “non-OOP” languages like C often end up
doing some sort of bad OOP where the first parameter is some sort of explicit
this pointer. I would tend to agree with the author’s conclusion: it’s often
not worth trying to remove all instances of a paradigm from your code,
especially if you’re working in a multiparadigm language. If you try to stay
away from some of the more problematic aspects (inheritance) while retaining
the strengths (encapsulation) I think you’ve done a good job.
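That C idiom, rendered here in Python for the sake of a single example language (all names hypothetical): a plain record plus free functions that take it as an explicit first parameter.

```python
# C-style "object": a plain record plus free functions that take it first.
def buffer_new():
    return {"data": [], "closed": False}

def buffer_write(buf, item):   # 'buf' plays the role of the explicit this
    if buf["closed"]:
        raise ValueError("write after close")
    buf["data"].append(item)

def buffer_close(buf):
    buf["closed"] = True

b = buffer_new()
buffer_write(b, "x")
buffer_close(b)
print(b["data"])  # ['x']
```

Whether this counts as "bad OOP" or just honest procedural code, the shape is the same: state travels as the first argument, and nothing stops a caller from poking at it directly.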

~~~
dktoao
I agree with you that the biggest strength of OOP is encapsulation that is
easily accessible. As for inheritance, I think it is also a good abstraction,
but in a very limited number of circumstances. In 10 years as a professional
developer I have only ever implemented something using inheritance once. I
think the problem with inheritance is that it is far too encouraged, and
people fit it into problems for which it is unsuitable.

EDIT: typo

~~~
wvenable
> In 10 years as a professional developer I have only ever implemented
> something using inheritance once.

If you build a library or framework, you'll use it more. There are often
plenty of literal "is-a" relationships that don't exist as often in
application code. So you probably benefit from inheritance in OOP quite a bit
even if you aren't building those relationships yourself.

~~~
dktoao
And now that I think of it, what I was building was a data visualization
framework that users could create "plugins" for to handle many disparate types
of data. So yeah, frameworks are a great example of cases where inheritance
shines, but is rather niche and not many people work on them day to day.

------
zozbot234
I'm not sure I agree on this. OOP is really about implementation inheritance
and ad-hoc polymorphism based on the same, and it's pretty clear by now that
these are not good ideas because of e.g. the fragile-base-class problem, which
basically involves violations of modularity.

Simpler object-based paradigms and languages, which feature "classes" and
"objects" for encapsulation and data abstraction, plus maybe some interface-
only subtyping, are used all the time and work quite well with functional
programming concepts.

~~~
acdha
The main lesson I’ve drawn is that the real problem is dogma: OOP works fine
if you aren’t trying to follow some One True Way™ rather than adjusting based
on your needs, problem domain, and resources. The same is true of most other
styles: FP is prone to getting a certain brand of immutability fanatic who
will cause problems until they learn more about computer architecture and get
a more nuanced position.

In every case, focusing on dogma avoidance and having reasonable technical
debt levels seems to be far more important than the specific language and
paradigm.

~~~
smabie
People think that with FP you end up just copying a bunch of stuff constantly.
But in my experience, that's rarely true: structural sharing minimizes copying
so that the performance is usually on par with in place mutation. In fact, in
some situations, the immutability guarantees make the program faster than the
mutable version.

Moreover, advanced compilers (like GHC and the OCaml compiler), can often turn
functional code that allocates and copies into in-place, GC-free code.
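Structural sharing can be shown with a minimal persistent cons list (a sketch, not a production structure): "adding" an element allocates exactly one new cell and shares the entire rest.

```python
class Cons:
    """One cell of an immutable singly linked list."""
    __slots__ = ("head", "tail")

    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

base = Cons(2, Cons(3))    # the list (2, 3)
extended = Cons(1, base)   # the list (1, 2, 3): one new cell only

# No copying happened: the old list is physically embedded in the new one,
# which is safe precisely because neither can ever be mutated.
print(extended.tail is base)  # True
```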

~~~
acdha
Remember, I was talking about dogmatic silliness - the kind of thing where
someone thinks it’s some kind of moral failing to use Haskell’s mutable
structures, even though they’re there for a reason.

My point was that this seems like the same mindset even in two fairly
different domains, and that we’re prone to talking about it as a technical
problem when it’s really more of social one.

