
Goodbye, Object Oriented Programming - ingve
https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53
======
overgard
Programming paradigms are a lot like political parties -- they tend to lump a
lot of disparate things together with a weakly uniting theme. You don't need
inheritance for encapsulation to be useful, for instance.

The problem is, sometimes you agree with only a small part of the platform.
None of these things individually are terrible ideas if tastefully applied,
but it all gets clumped together into one big blob of "the right way to do
things" (aka object oriented programming). I blame languages like Java for
selling certain ideas as The Right Way, and building walls that intentionally
prevent you from using other techniques from different schools of thought
("everything is an object, no you can't write a function outside of a class").

I think the functional paradigm has a lot of good ideas too, but in my
experience they're just as annoying if they're strictly and tastelessly
applied in the same way OOP principles often are.

Don't be a "functional programmer", just take the ideas that are useful.

I tend to prefer languages and tools that adopt good ideas without promoting a
single specific way of thinking.

~~~
eutectic
Except that there is a real advantage to using pure functional programming;
being able to easily prove theorems about your code and understand different
components in isolation. There is a reason why the majority of proof
assistants are implemented as functional languages.

Most functional languages even give you ways of modelling imperative code
(e.g. monads) in a way which hardly sacrifices expressiveness.
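
The idea can be sketched without any monad machinery: a pure "imperative" step is just a function from a state to a (result, new state) pair, and sequencing threads the state explicitly. A minimal Java illustration with made-up names (not any particular library's API):

```java
// A sketch of the idea behind modelling imperative code purely: an
// "imperative" computation is a pure function from a state to a
// (result, new state) pair. All names here are illustrative.
public class PureCounter {
    // Immutable pair of result and successor state.
    record Step(int result, int state) {}

    // "Increment" is pure: it returns a new state instead of mutating one.
    static Step increment(int state) {
        return new Step(state, state + 1);
    }

    // Sequencing two "imperative" steps threads the state explicitly;
    // a monad just hides this plumbing.
    static Step twice(int state) {
        Step first = increment(state);
        return increment(first.state());
    }

    public static void main(String[] args) {
        Step s = twice(0);
        System.out.println(s.result()); // 1 (result of the second increment)
        System.out.println(s.state());  // 2 (final state)
    }
}
```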

The real problem with OOP is not that it forces you to do things a particular
way, it's that what's new about it (privileging the first argument of each
procedure, inheritance, lots of hidden mutable state) is bad, and what's good
about it (encapsulation, polymorphism) is not new.

~~~
seanmcdirmid
> Except that there is a real advantage to using pure functional programming;
> being able to easily prove theorems about your code and understand different
> components in isolation.

And how often does that occur in practice for most of the programs people
write? Even the most hardcore Haskell programmer isn't going around proving
theorems about their modules beyond what the type system can provide for free
(and that is true of statically typed OO also).

> The real problem with OOP is not that it forces you to do things a
> particular way, it's that what's new about it (privileging the first
> argument of each procedure, inheritance, lots of hidden mutable state) is
> bad, and what's good about it (encapsulation, polymorphism) is not new.

Encapsulated state has its own benefits and drawbacks (having explicit,
unencapsulated state limits reuse, as it is reflected in type signatures),
and anyway, it is not unique to OOP (any commonly used language sans Haskell
supports encapsulated state). The only way to get encapsulated state in
Haskell is World -> World, which would pollute every function signature it
bubbled through (there is some good work in parametric effect systems, but
it's still early).

OOP also is not very new, being the culmination of patterns used in the '70s,
and it is about the same vintage as FP.

~~~
sbmassey
Any sort of thinking about what the code does or does not do is essentially
proving theorems about it, is it not? It seems to me one does that all the
time when debugging.

~~~
seanmcdirmid
I don't think about coding that way. To me debugging is more like a Sherlock
Holmes investigation rather than a formal theorem proving process. I guess we
maybe work on different kinds of programs.

~~~
l_dopa
You're still, at some point, simulating the steps of some abstract machine in
your head to understand what the debugger is telling you.

The simplest case is replacing an expression with its value, given an
environment of lexical bindings that are apparent from the source program. Not
much investigation necessary. That's FP.
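
That substitution model can be shown in a few lines (an illustrative sketch, not code from the thread):

```java
// Referential transparency in miniature: with no mutation, any call can be
// replaced by its value using only what is visible in the source.
public class Substitution {
    static int area(int w, int h) { return w * h; }

    public static void main(String[] args) {
        int w = 3, h = 4;
        // area(w, h) can be read as area(3, 4), which reads as 3 * 4 = 12,
        // without consulting any hidden state.
        System.out.println(area(w, h)); // 12
    }
}
```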

For OO code, you just need to keep track of a lot more context: the state of
the receiver of the currently executing method, its hierarchy of parent
classes, and the runtime class of each object, because late binding is
pervasive.
Of course you can write code that doesn't use any OO features, but the
languages clearly aren't designed for it. See: any number of "functional C++"
articles.

And, of course, you can get the same kind of highly dynamic behavior in FP
languages by explicitly using open recursion, higher-order state and hiding
everything behind existentials. But very few codebases do that because the
vast, vast majority of the time just one of these features is enough to solve
a problem.

~~~
seanmcdirmid
OO context is internalized linguistically, via lots of metaphors. Those
metaphors can lie, of course, but your brain can apply abstractions to the
state of the machine to deal with its complexity, for better or worse. Ideal
FP debugging, which I don't think exists in practice, relies on ultimate truth
with equational reasoning realized through techniques like referential
transparency. In the ideal case, you just reason about the equation and there
are no hidden surprises, fuzzy metaphorical reasoning is minimized. In
practice, there is still plenty of metaphorical reasoning going on for any
non-trivial program as even explicit state necessarily becomes implicit as it
increases in quantity (our brains can't handle seeing everything even if it is
technically all in front of us).

OO thinking optimizes for the less ideal cases that are far more common. Ideal
FP has this ideal mathematical view of the world that rarely pans out in
practice, while less ideal FP just resorts to OO-style metaphors and
abstraction in practice. For the kinds of experiences I work on (heavily
reactive, lots of state, complex interactions), this works well for me, and we
are developing techniques to make it less painful (live programming). "Worse
is better", as RPG would say.

~~~
l_dopa
It sounds like we agree that lexically scoped immutable values are easier to
understand, either precisely or with fuzzy metaphors. I'm not sure what
"ideal" FP is or how it might have a certain "mathematical view of the world".
The features I mentioned are just a subset of semantics that every high-level
language programmer already knows, but are pointlessly hobbled in popular
languages.

Why should expressing something basic like a tagged union require a detour
through the quirks of a particular object system? Ditto for polymorphism,
modularity, etc, etc. Clearly we disagree on how useful objects really are in
practice, fine. Why not add these domain-specific features on top of a core
language with simple semantics? It worked fine for lisps. Luckily, after a
couple of decades of "everything is an object!" nonsense, that seems to be
where newer languages (Rust, Swift) are headed.
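
Even newer Java versions now support this fairly directly: a tagged union expressed as a sealed interface over records, with one function handling every case. A Java 17+ sketch with illustrative names:

```java
// A tagged union ("Shape = Circle | Rect") expressed with sealed interfaces
// and records -- the direct encoding, with no inheritance of behaviour.
// Names are illustrative.
public class Shapes {
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    // One function handling every case; Shape is sealed, so the compiler
    // knows these are the only variants.
    static double area(Shape s) {
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Rect r) return r.w() * r.h();
        throw new AssertionError("unreachable: Shape is sealed");
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // 12.0
    }
}
```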

~~~
seanmcdirmid
You mean CLOS? This is exactly the context in which RPG coined "worse is
better".

Languages are not so much collections of features as mindsets. So
polymorphism, dynamic dispatch, subtyping, etc. do not define OOP so much as
they are leveraged by those languages to enable reasoning with names and
metaphors. Calling them just domain-specific features misses the point, like
describing a dish only as the sum of its ingredients.

Tagged unions and GADTs are quite different in expressiveness and modularity.
I still remember the mega case matches used in scalac. Should one function
really be given so much functionality, when a layered design with several
virtual method implementations would be much more amenable to change and
modular reasoning? Well, I guess it's a matter of how you view code.
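
The virtual-method side of that tradeoff, in miniature: each variant carries its own implementation, so adding a variant touches one place, while adding a new operation touches every variant -- the classic expression problem. An illustrative Java sketch:

```java
// Virtual-method dispatch: behaviour lives with each variant instead of in
// one big case match. Names are illustrative.
public class Dispatch {
    interface Shape { double area(); }

    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }
    record Rect(double w, double h) implements Shape {
        public double area() { return w * h; }
    }

    public static void main(String[] args) {
        // A new variant only needs its own area(); no central match changes.
        Shape s = new Rect(3, 4);
        System.out.println(s.area()); // 12.0
    }
}
```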

~~~
l_dopa

      Languages are not so much a collection of features ...
    

If you want to define objects precisely, even just to have a language spec,
they are absolutely made up of sums, products, recursive types, etc. Whatever
useful metaphors one might have to work with objects doesn't change what they
actually are. If you give the programmer access to these building blocks, you
get ML.

      Calling them just domain specific ...

I meant that the particular way they're combined to get OOP is domain-
specific.

Again, I'm just asking: why mess with the basics? Why require encoding simple,
universal concepts in terms of an ad-hoc object system? You can still have an
object system on top, if you find that helps with the really complicated
cases, but why encode the parts of your code that _are_ simple in terms of
much more complicated, derived concepts?

------
millstone
Let me try to list the objections:

1. Inheritance creates dependencies on the parent class

2. Multiple inheritance is hard

3. Inheritance makes you vulnerable to changes in self-use

4. Hierarchies are awkward for expressing certain relationships

All true. But likewise, functions introduce dependencies on their arguments,
and data structures introduce dependencies on their fields. You must consider
your dependencies carefully when designing any software interface.

The task of software architecture is not to go around categorizing everything
into taxonomies. Inheritance is just one tool in your software interface
toolbox.

5. Reference semantics may result in unexpected sharing

This has more to do with reference semantics than objects.

6. Interfaces achieve polymorphism without inheritance.

Interfaces long for inheritance-like features. For example, see Java 8's
introduction of default methods, or the boilerplate involved in implementing
certain Haskell typeclasses.
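
For example, a Java 8 default method lets implementors inherit behaviour from an interface without a class hierarchy (illustrative names):

```java
// A Java 8 default method: shared behaviour supplied by the interface
// itself, inherited by implementors without extending any class.
public class Defaults {
    interface Greeter {
        String name();
        // Inheritance-like reuse, but on an interface.
        default String greet() { return "Hello, " + name(); }
    }

    static class World implements Greeter {
        public String name() { return "world"; }
    }

    public static void main(String[] args) {
        System.out.println(new World().greet()); // Hello, world
    }
}
```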

~~~
ebbv
That is a good analysis. While I was reading this article all I could think is
"You wanted to do things in a bad way and then you learned how to do it the
right way and you don't like the right way?"

His entire problem seems to be he thought OO was a magic bullet he could do
whatever he wanted with and then he learned there was more to using OO than
the three concepts he cites at the beginning.

And this guy has supposedly been writing in OO languages for decades? What?

~~~
twic
> And this guy has supposedly been writing in OO languages for decades? What?

This is the bit i don't get. It's like he learned OOP in the '90s, when
everyone thought inheritance was rad and nobody had realised how terrible
mutating shared state was, and then fell asleep for twenty years. None of this
article, _none of it_ , has any relevance to how OOP is practiced by informed
people today.

~~~
flukus
The keyword there is informed. There are still plenty of shops that practice
OOP this way. I'm working at one right now and there are OOP horrors around
every corner.

Unfortunately, they seem intent on adding more mutable state rather than
eliminating it.

~~~
vikiomega9
Do you have anecdotes you can share? :D

~~~
flukus
Quite a few, but the worst would be the custom logger. I got lost trying to
trace the inheritance graph, which spans projects, and the dependency graph:
loggers within loggers within loggers. I gave up when I ran out of space
trying to sketch the relationships in my notepad.

Being encapsulated means you don't actually have any control over the logger,
or the threads it spins up. Creating a logger for a new app requires
inheriting from an application logger (which is already 3 or 4 layers deep in
the inheritance hierarchy).

The distinction other loggers make between the log interface and the appenders
is non-existent. If you want to log to a new source (say, the event log) you
have to add a new layer to the inheritance tree.

Then there's the fact that it's handling the application state, in an "on
error resume next" kind of way. And this is global state, so don't even
think about multi-threading.

Naturally, actually accessing the logger is done via a singleton.

It causes more problems than it helps resolve, and the ones it causes
naturally don't have any diagnostic information available.

There are plenty of others, but it's hard to top this tour de force of OO anti
patterns. If only there was some free and stable alternative...

~~~
GFK_of_xmaspast
> If only there was some free and stable alternative...

There's no shortage of good-enough loggers out there for major languages.

~~~
pkroll
The ellipsis in this case appears to me to be a sarcasm indicator, not a real
wish.

------
Illniyar
I think the functional vs OO debate is being done with a very narrow point of
view.

Functional came before OO, and there are reasons why OO became much more
popular: it had a much better, easier, and simpler solution to the most
common problems of the '90s and early 2000s, namely handling GUIs and
keeping single-process app state (usually for a desktop app).

It fares much worse in today's world of SaaS and massive parallel computing.

Frankly, I think the discussion would be much better if we debated the merits
of each paradigm in the problem domain you are facing, rather than blindly
bashing a paradigm that is less suited to your problem domain.

For instance I have yet to see an easy and simple to use (and as such
maintainable) functional widget and gui library.

~~~
DanWaterworth
> For instance I have yet to see an easy and simple to use (and as such
> maintainable) functional widget and gui library.

Like react?

~~~
quacker
My impression of react is that it's very object oriented. Defining reusable,
stateful components is classic OO design. Now, I don't have much experience
with React so maybe someone can explain why it should be considered
"functional".

~~~
acemarke
React is an interesting mix of functional and OO. It's OO, in that the primary
approach for defining components is class-based, and components have state and
lifecycles. It's functional, in that the render methods are expected to be
pure functions based on component state and props, and simply output a
description of what the UI should look like as a result. Also, as a whole,
React definitely pushes you to view the system in terms of composition, state
transformations, and pure functions, rather than imperative "toggle this, add
that, update the other thing".
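
Stripped of the framework, the "render is a pure function of state" idea looks like this (a sketch with made-up names, not React's actual API):

```java
// The functional half of the mix: the UI is a value computed from state,
// not a structure mutated in place. Names are illustrative.
public class PureRender {
    record State(int count) {}

    // Same state in, same description out -- nothing toggled or updated.
    static String render(State s) {
        return "<button>Clicked " + s.count() + " times</button>";
    }

    public static void main(String[] args) {
        System.out.println(render(new State(3)));
    }
}
```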

~~~
TheOtherHobbes
Maybe we need a new tag: OOF.

Or possibly FOO.

------
ryanmarsh
_The venerable master Qc Na was walking with his student, Anton. Hoping to
prompt the master into a discussion, Anton said "Master, I have heard that
objects are a very good thing - is this true?" Qc Na looked pityingly at his
student and replied, "Foolish pupil - objects are merely a poor man's
closures."_

 _Chastised, Anton took his leave from his master and returned to his cell,
intent on studying closures. He carefully read the entire "Lambda: The
Ultimate..." series of papers and its cousins, and implemented a small Scheme
interpreter with a closure-based object system. He learned much, and looked
forward to informing his master of his progress. On his next walk with Qc Na,
Anton attempted to impress his master by saying "Master, I have diligently
studied the matter, and now understand that objects are truly a poor man's
closures." Qc Na responded by hitting Anton with his stick, saying "When will
you learn? Closures are a poor man's object." At that moment, Anton became
enlightened._

[http://people.csail.mit.edu/gregs/ll1-discuss-archive-
html/m...](http://people.csail.mit.edu/gregs/ll1-discuss-archive-
html/msg03277.html)
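
The koan in code -- a "counter object" built from nothing but a closure over hidden state, and, read the other way, an object as a record of such closures (an illustrative sketch, not from the linked thread):

```java
import java.util.function.IntSupplier;

// A closure-based "object": the captured array is private state that only
// the returned function can reach.
public class ClosureCounter {
    static IntSupplier makeCounter() {
        int[] count = {0}; // captured, privately held state
        return () -> ++count[0];
    }

    public static void main(String[] args) {
        IntSupplier tick = makeCounter();
        tick.getAsInt();
        System.out.println(tick.getAsInt()); // 2
    }
}
```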

~~~
qwertyuiop924
That's true, but it really doesn't have much to do with this discussion.

------
skywhopper
OO is just a way of organizing code. You can simulate quite a bit of it in
non-OO languages. But a lot of the problems are universal.

OO lets you abstract away a lot of detail, but locks you into some rigidity
that doesn't map perfectly onto the real world. It's a leaky abstraction. But
so is _everything_ real that we attempt to represent in a computer or in any
formal system. Gödel proved this 85 years ago.

Code reuse is entirely possible with OO. The practical difficulties of code
dependency management are not unique to OO. Anyone who's ever developed
anything non-trivial in Node has seen how insane the dependency tree can get.
Every language and platform has its own version of this problem and its own
solution. From Windows DLL hell to Ubuntu Snap, from Bundler to Virtualenv,
this problem transcends any particular style of programming.

It's good the author is skeptical of the promises of functional programming,
but the total rejection of OO concepts as useless reveals that ultimately the
author didn't really learn anything useful. The author fails to address how
abandoning OO solves any of the problems he claims to have. "Ew, that's
gross!" is not a useful analysis.

------
saosebastiao
By some odd cosmic anomaly, I learned programming almost exclusively in
functional programming environments. My first language was R, and I
subsequently learned Scheme, Clojure, OCaml, Haskell, and currently program
primarily in Scala. Having never gone through the OOP trend, and realizing
that my current programming experience happened to be du jour, gave me some
undeserved confidence. So much so that I would regularly make fun of all of
the Java drones at my work for their insistence on using such an inferior
paradigm.

Then due to some directions I was taking at my job, it became very valuable to
run millions of simulations of warehouse and transportation operations. After
months of pain, I discovered object oriented programming (luckily I didn't
have to abandon my language of choice to get it). Comparatively speaking,
there wasn't a functional design pattern I could find that could come anywhere
close to the simple elegance of OOP for modeling people, vehicles, warehouses,
etc.

It's almost as if different ideas have different virtues in different domains.

~~~
rdtsc
> it became very valuable to run millions of simulations of warehouse and
> transportation operations. After months of pain, I discovered object
> oriented programming

That's interesting. I think simulation specifically is an area that fits OO
too well. It even fits the classic textbook example of "a Car is a Vehicle,
which encapsulates an Engine; its state has speed and position, etc."

~~~
dboreham
Back in the day, when OO languages were beginning to be promoted in the late
'80s, the intro always began with a reference to Simula 67:
[https://en.wikipedia.org/wiki/Simula](https://en.wikipedia.org/wiki/Simula)

------
saticmotion
My biggest gripe with OOP is the Oriented part. If you design your entire
codebase around OOP you will run into architectural problems. Especially with
so-called Cross Cutting Concerns[0]. The way I tend to write code is to just
start with my main function and write whatever procedural code I need to solve
my problem. If I start seeing patterns, in my data or algorithms, that's when
I start pulling things out. I have heard this approach being called
"Compression Oriented Programming", but I don't care much for what people want
to call it.

This approach doesn't mean no objects ever. But only when your problem
actually calls for it. Likewise you will also end up with parts that are
purely functional, data-oriented, etc. But they will be used where they make
sense.

On top of that I'm also using pure C99. It does away with a lot of the fluff
and cruft in other languages. In the past I used to try to fit my problems
into whatever fancy language features I was offered, which cost me a lot of
time on analysis. Now I just solve my problem.

Mind you, C is not a perfect language. There are features I wish it had. But
for my approach to programming it is the most sensible to use, apart from
maybe a limited subset of C++ (such as function overloading, and operator
overloading for some math).

[0] [https://en.wikipedia.org/wiki/Cross-
cutting_concern](https://en.wikipedia.org/wiki/Cross-cutting_concern)

~~~
pfultz2
> The way I tend to write code, is to just start with my main function and
> write whatever procedural code I need to solve my problem. If I start seeing
> patterns, in my data or algorithms, that's when I start pulling things out.

That is the same technique that Stepanov describes in 'From Mathematics to
Generic Programming'.

------
whack
Most of the problems he brings up are already addressed in major OOP
languages.

1) _Inheritance can be confusing and messy._

Yes, hence the advice: Prefer composition over inheritance. Instead of having
B inherit from A, declare an interface I, and have both A and B implement I.
If B wants to reuse A's functionality, it's free to do so through composition,
and not through inheritance.

There are some edge cases where inheritance is vastly simpler than composition
- mostly when the interface requires you to implement 20 different methods,
and there's only 1 method that you really care about changing. Using
inheritance here gets rid of a ton of boilerplate, but that's a conscious
choice you're making. If you don't like this, just revert to using
composition.
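
A minimal sketch of that advice, with illustrative names: B reuses A's behaviour through a contained instance and a shared interface, rather than by extending A.

```java
// Composition over inheritance: Bold reuses Plain via delegation instead of
// extending it, and stays free to swap the delegate later.
public class Composition {
    interface Renderer { String render(); }

    static class Plain implements Renderer {
        public String render() { return "text"; }
    }

    static class Bold implements Renderer {
        private final Renderer inner = new Plain(); // contained, not inherited
        public String render() { return "<b>" + inner.render() + "</b>"; }
    }

    public static void main(String[] args) {
        System.out.println(new Bold().render()); // <b>text</b>
    }
}
```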

2) _Encapsulation can leak if you write buggy code_

Any program can break if you write buggy code. Not sure what the author's
point here is. In order to encapsulate your class carefully, either accept
immutable inputs, or make deep copies of them. If neither happens to work,
warn users that class behavior is undefined if they misuse it. This is what
every non-thread-safe class already does anyway: it warns users that if you
use them in a concurrent manner, things may break.

More importantly, when dealing with internal state that's created by the
class, make it private and ensure no one else can access it. This also serves
to encapsulate the internal implementation and algorithm from external users.
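
A sketch of both rules -- copy mutable inputs on the way in, and never hand out the internal reference (illustrative names):

```java
import java.util.ArrayList;
import java.util.List;

// Keeping an encapsulation from leaking: defensively copy mutable inputs,
// and expose internals only through read-only views.
public class Roster {
    private final List<String> names;

    Roster(List<String> names) {
        this.names = new ArrayList<>(names); // defensive copy of the input
    }

    List<String> names() {
        return List.copyOf(names); // unmodifiable copy; internals stay private
    }

    public static void main(String[] args) {
        List<String> input = new ArrayList<>(List.of("a"));
        Roster r = new Roster(input);
        input.add("b"); // mutating the caller's list cannot reach the Roster
        System.out.println(r.names()); // [a]
    }
}
```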

3) _Polymorphism is... not unique to OOP languages?_

Yes, using interface-based polymorphism is a good idea, and covers most of
what people need. How does this make the argument that we should never use OOP
languages?

--------

The author brings up valid points about what to watch out for when coding in
OOP. If you read other books like "Effective Java," they bring up the same
points as well. But instead of acknowledging the benefits that come with OOP
as well, and teaching people how to avoid these pitfalls and write code the
right way, the author jumps to an extreme position that OOP languages should
be abandoned entirely. Can we please avoid this type of wild overreaction, and
the pointless jumping from one shiny tool to the next in a never-ending search
for a silver bullet that will solve all of our problems? Because let's face
facts: no such silver bullet exists.

~~~
jaegerpicker
So... we said never to look for something better than Java? Or OOP? What if
FP gives us all the important things that OOP does, and more? Why wouldn't we
use it?

Personally, I want to learn from and use a language that supports as many
paradigms as possible, like Scala or Swift. Let me choose based on what I
need. That being said, I'd much rather work in pure FP than OOP, because most
of the advantages of OOP can be achieved via FP and the inverse is not true.

~~~
Shorel
Or C++.

Templates or generic programming can be argued to be another paradigm, and
quite powerful as well.

D supports that as well, along with purity checking for FP.

------
stepvhen
In other literature the answer to inheritance is "composition" or
"components" rather than "delegate and contain." A nitpick, but I think it
better captures the meaning of the method.

Bob Nystrom wrote a very good chapter on composition in his Game Programming
Patterns book [1] and is worth reading if you want to program in the OO
paradigm.

[1]
[http://gameprogrammingpatterns.com/component.html](http://gameprogrammingpatterns.com/component.html)

------
EdJiang
Interesting. I almost thought this was going to be an advertisement for Swift,
since I saw this exact argument in a WWDC talk.

Apple calls Swift a "protocol-oriented" programming language, and with the
addition of first class value types, tries to solve these problems in their
own way.

I'd definitely suggest people frustrated by the problems outlined in this post
to check out the Apple talk on protocol-oriented programming in Swift.

[https://developer.apple.com/videos/play/wwdc2015/408/](https://developer.apple.com/videos/play/wwdc2015/408/)

~~~
eyelidlessness
It's a great video, and clearly the design behind Swift has tried to address
many of the biggest problems with modern OOP. But Swift is by necessity a
multi-paradigm language that has to interface with existing OOP code.

If you're writing a Mac or iOS app, you're generally writing Cocoa with either
Swift or ObjC syntax. Cocoa is unbearably stateful. For a simple app, you not
only don't get to work with value types, you typically don't even get
functions or methods which return anything.

Obviously for something more complex, your non-UI code can absolutely be
written with the principles described in that video in mind, but if you try to
apply those principles to Cocoa APIs directly, you're going to be fighting
against its stateful nature constantly.

~~~
astrange
In ObjC everything is a "reference type" since it's a pointer, but most of the
data structures are immutable, and some of them are stuffed into tagged
pointers, so they really are values after all. (Try implementing that in C++!)

CoreAnimation and bindings make AppKit very slightly more functional, but not
really.

------
mk89
When I read such titles I feel sad.

In 2016 we are still talking about Cobol, which survives in a relatively
niche market and is considered a pillar in fields like banking. How can the
object-oriented paradigm be considered "past", or even bad? It is the present
and will be the future for at least the next 20 years, considering the
billions of lines of code out there. From a management perspective, such
statements are not strong enough to be justified.

I find this sort of article to be just bread and butter for codemonkeys:
people who learn the most recent paradigm, technology, or whatsoever and think
that it's the key to happiness, or people who read for the first time a book
like the ones from Bob Martin and feel they already know how to develop good
software - or poems, as mentioned somewhere in the book - and list the bad
things about other types of software architecture or design or whatever.

~~~
eru
Cobol is certainly considered bad.

~~~
mk89
While I think that you are definitely right, and that nowadays Cobol would
probably not be the first choice for 98% of companies out there, I think we
should also consider that according to TIOBE (
[http://www.tiobe.com/tiobe_index](http://www.tiobe.com/tiobe_index) ) Cobol
is one of the 20 most used languages in the world - and it's also thanks to it
that we can use banks efficiently. Can we all say goodbye to Cobol? Go ask
the people who program in that language - who make a s***load of money with
it.

I personally could say goodbye to Assembly, as I only used it for learning
purposes, I could say goodbye to Perl, because I don't use it and I don't like
it, or I could say goodbye to Visual Basic - for other reasons. All these
languages are extremely powerful, they still do their job _today_ , although
some are too low-level for our everyday applications, some are just in a
process of replacement (like Cobol, of course). However, I tend to keep these
opinions to myself.

"Saying goodbye" in such cases is a strong statement, inherently
sensationalist and biased, therefore my criticism toward the title. In a field
like computer science, you can't and shouldn't use such titles. Such a title
shows already how biased you (the author) are.

~~~
eru
Oh, you can make great money doing Cobol. Demand is niche, but supply is even
smaller.

(My grandfather made very good money in the late '90s making Cobol and
assembly programs at banks and insurance companies y2k-compliant.)

------
kentt
This is just a rant. It's not about Object Oriented vs Functional. Perhaps it
could have been, if it had said how functional programming helps with these
issues.

The summary of the article is that programming is nuanced. You can attribute
some of those nuances to OO design.

------
maxxxxx
Let's wait a few years and we'll see plenty of articles titled "Goodbye
functional programming". You can write good and bad stuff with OOP, and you
can do the same with FP. There is no one-size-fits-all programming style.

~~~
jolux
You're making a false equivalence. Functional programming has a strong basis
in discrete mathematics that OOP simply does not. This makes it better suited
to accurately describing computation at a high level than OOP.

However, you probably arrived at this conclusion because you still see
programming languages as having intrinsic paradigms, and those paradigms as
meaning something about computation.

The fact of the matter is that all programming languages are little more than
notation systems for algorithms. The best programming language is the most
accurate algorithmic notation system.

~~~
dkarapetyan
You mean lambda calculus? That model has plenty of shortcomings. Someone
recently pinpointed the problem for me: complexity analysis is impossible in a
system that is inherently unaware of time and the computational cost of
transition rules. Lambda calculus is inherently timeless (both in the
theoretical sense, because of Turing equivalence, and in the practical sense
of being unable to provide a proper framework for complexity analysis). See
[http://cstheory.stackexchange.com/questions/376/using-
lambda...](http://cstheory.stackexchange.com/questions/376/using-lambda-
calculus-to-derive-time-complexity).

~~~
jolux
So you list one shortcoming, and don't even cover the position that lambda
calculus is better suited as a foundation for a programming language despite
not being better at time complexity than a Turing machine would be?

[http://cstheory.stackexchange.com/questions/21705/what-is-
th...](http://cstheory.stackexchange.com/questions/21705/what-is-the-
contribution-of-lambda-calculus-to-the-field-of-theory-of-computatio)

My point is basically made here:

    
    
      This algebraic view of computation relates naturally to programming languages 
      used in practise, and much language development can be understood as the search 
      for, and investigation of novel program composition operators.

~~~
dkarapetyan
This is the usual response. I didn't say the algebraic approach does not have
advantages, but the complexity analysis issue is always swept under the rug by
proponents of the algebraic approach. Operational and axiomatic semantics also
have their place, and if the theoreticians haven't yet settled on "the one
true way", why is it that regular programmers think they have?

I agree that the search for novel composition mechanisms is a prominent goal
of PL theory and research but it is not the only one.

~~~
jolux
Fair enough. I'll agree that there is no "one true way", and contest whether
it's likely there is one. However, I do think I understand the frustration
some theoreticians have with the mainstream of programming, when their
contributions to programming language theory through denotational semantics,
type theory, monads, etc. have been largely ignored for close to thirty years
in favor of recycling the same tired dialects of Algol over and over again.

------
bsaul
Funny how some people believe software programming is one big problem to solve
as a whole, rather than a craft. OO is one tool in your toolbox. A good
craftsman doesn't use just one tool; he knows which tool to use for which job.

------
vinceguidry
Inheritance is overused in OOP. There are many ways to share object behaviors,
inheritance only works well when you expect all objects of both classes to
share all behavior except one or two things. Even then, you should investigate
dependency injection before reaching for inheritance.

For the example given for the Triangle Problem, the author isn't clear about
exactly what behavior is being shared among the classes. The top of the tree,
PoweredDevice, gives an indication, but my guess is that there are more
responsibilities than just power, and these responsibilities aren't being
reflected in the domain model as they should be.

Instances of a class share behavior with other instances; it is the state that
differs, i.e. the data stored in the instance variables. In the example
hierarchy, the state being stored is left out of the analysis, but it's the
_first_ place I would look for a missing domain concept. My guess is that the
most concrete classes are going to be models of consumer peripherals, whose
instances are intended to represent actual devices.

In this case a copier, which contains both a scanner and a printer but is not
itself a discernible model of either, would simply inherit from PoweredDevice.
That it has this functionality does not mean it needs those classes in its
hierarchy. That is a job better suited to mixins or injected dependencies.
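For illustration, the mixin route might look like this in Java (all names
invented), using default methods so the copier composes behavior instead of
inheriting it:

```java
// Invented sketch: "mixin"-style behavior sharing via Java default methods.
// Copier gets scanning and printing behavior without sitting under a
// Scanner or Printer class in a hierarchy.
interface Scans {
    default String scan() { return "scanned page"; }
}

interface Prints {
    default String print(String document) { return "printed: " + document; }
}

class Copier implements Scans, Prints {
    // A copy is just scan() fed into print(); no PoweredDevice tree needed.
    String copy() {
        return print(scan());
    }
}
```

With that, `new Copier().copy()` does scan-then-print with no base class in
sight.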

~~~
jghn
From a purist view one could argue that inheritance doesn't belong in OO in
the first place. Alan Kay's first descriptions of Smalltalk & OOP did not
include inheritance concepts.

~~~
imtringued

      class A {
          public int field1;
          public void method1() {
              System.out.println("A");
          }
    
          public void method2() {
              System.out.println("A");
          }
      }
    
      class B extends A {
          public int field2;
          public void method1() {
              System.out.println("B");
          }
      }
    

is basically equivalent to

    
    
      interface C {
          void method1();
          void method2();
      }
      
      class A implements C {
          public int field1;
          public void method1() {
              System.out.println("A");
          }
          public void method2() {
              System.out.println("A");
          }
      }
    
      class B implements C {
          public A a;
          public int field2;
          public void method1() {
              System.out.println("B");
          }
          public void method2() {
              a.method2();
          }
      }
    

As you can see, both are basically the same except for one little detail: the
benefit of inheritance is that you don't have to write the redundant "method2"
in class B. So the only time inheritance is ever useful is when you don't want
to override all the methods. But once someone tells you to model everything in
terms of class hierarchies, even where they aren't needed, you've negated the
tiny benefit inheritance ever had, which means leaving it out of the
programming language actually has positive consequences.

------
graycat
I find many of the objects in .NET very useful and use them in my code.

Also in my code I define and use some classes.

I like the idea of classes. E.g., in my Web pages, I have a class for the
user's _state_. When a new user connects, I allocate an instance of that
class. Then I send that instance to my session state store server. To do that,
I _serialize_ the class to a byte array and then send the byte array via
TCP/IP. The session state store server receives the byte array and
_deserializes_ it back to an instance of the class and stores it in an
instance of a collection class. Works great. It's really convenient to have
all the user's _state_ in just one instance of one class. Terrific.
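In Java terms (the commenter's stack is .NET, so this is only an analogous
sketch with invented names), that serialize/deserialize round trip looks like:

```java
import java.io.*;

// Analogous Java sketch (the original is .NET): serialize a user-state
// object to a byte array, as you would before sending it over TCP/IP to a
// session state store, then deserialize it back into an instance.
class UserState implements Serializable {
    String userId;
    int pageCount;

    UserState(String userId, int pageCount) {
        this.userId = userId;
        this.pageCount = pageCount;
    }
}

class SessionStore {
    static byte[] serialize(UserState state) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(state);
        }
        return bytes.toByteArray();  // ready to send over a socket
    }

    static UserState deserialize(byte[] data)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (UserState) in.readObject();
        }
    }
}
```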

 _Encapsulation_? I don't know what the OO principles say about encapsulation,
but it looks useful to me as a source of scope of names and keeping separate
any members in two different classes that are spelled the same. So, terrific:
When I define a new class, I don't have to worry if the names of its members
are also used elsewhere -- saved again by some scope of names rules.

Actually, I much prefer the scope of names rules in PL/I, but now something as
good as PL/I is asking for too much!

But inheritance? Didn't think it made much sense and never tried to use it.

Polymorphism? Sure, just pass an entry variable much like I did in Fortran --
now we call that an _interface_. Okay. I do that occasionally, and it is good
to have.

Otherwise I write _procedural_ code, and the _structure_ in my software is
particular to the work of the software and not from OO.

I couldn't imagine doing anything else.

I've seen rule-based programming, logic programming, OO programming, frame-
based programming, etc., but what continues to make sense to me is
_procedural_ programming with structure appropriate to the work being done.
E.g., the _structure_ in a piece of woodworking is different from that in
metal working, residential construction, office construction, etc.

~~~
rdtsc
> But inheritance? Didn't think it made much sense and never tried to use it.

This is interesting to read. When OO was popular, inheritance was at the top
of the list as the OO killer feature: "your 'Manager' class is just an
employee but with 3 extra methods, so you save so much copy and paste, etc
etc".

I believed it at some point. Then, perhaps 15 years later, after everyone had
been bitten by deep, confusing inheritance they had to maintain and debug,
inheritance is the devil.

It is just interesting to observe how what was once a selling point of OO is
now a big giant warning to stay away.

~~~
graycat
There was a lot of hype floating around. There still is. It's like the news:
Always the same just the names change.

------
discreteevent
He quotes Joe Armstrong's criticism of OO, but Seif Haridi later corrected
Armstrong, leading him to say:

"Erlang might be the only object oriented language because the 3 tenets of
object oriented programming are that it's based on message passing, that you
have isolation between objects and have polymorphism."

[http://www.infoq.com/interviews/johnson-armstrong-
oop](http://www.infoq.com/interviews/johnson-armstrong-oop)

~~~
contingencies
I'm not sure that's a fair summary. Armstrong states that his opinions have
changed over time, but mostly because his thesis supervisor pointed out that
Erlang is perhaps the only truly OO language out there.

He first quotes Alan Kay:

 _The notion of object oriented programming is completely misunderstood. It's
not about objects and classes, it's all about messages._

Armstrong then says:

 _Erlang has got all these things. It's got isolation, it's got polymorphism
and it's got pure messaging. From that point of view, we might say it's the
only object oriented language and perhaps I was a bit premature in saying what
object oriented languages are about. You can try it and see it for yourself._

If you'd like to read some further anti-OO ("as practiced") quotes, see my
fortune clone @
[https://github.com/globalcitizen/taoup](https://github.com/globalcitizen/taoup)

------
ryanmarsh
How about we just say this:

OO solves a set of problems albeit with tradeoffs

Functional solves a set of problems albeit with tradeoffs

There. We can all go back to our tea.

~~~
vegabook
yep... I was just dunking a hobnob while writing some good old imperative C.
Did anybody notice that Vulkan, "the future" of graphics APIs, doesn't f*ck
around with OO or Functional?

~~~
qwertyuiop924
Well, that's because nobody except John Carmack advocates FP for the kind of
fast code that complex graphics programming requires. And even he says that it
should be used in moderation.

~~~
Aaron1011
Also, most languages have an FFI mechanism for calling C libraries (if they
have one at all).

------
Kequc
OO is treated almost like a religion by some people. It's useful to be able to
create instances of some things, but where OO fails is the "oriented" part.
Code is much easier to maintain and understand when written in a functional
style.

If something doesn't need to be an instance, it probably shouldn't be one.

This article articulates a lot of problems I've noticed in OO code; I think it
would be foolish to ignore it. My life as a developer became 10 times easier,
maybe even more, once I realised some of these same pain points and pivoted.

In school I was taught all about OO coding practice and I think he's right,
they were wrong.

~~~
lispm
> Code is much easier to maintain and understand written in a functional
> state.

Why is it then that programs written in OO outnumber FP software by 100000:1
or more? For example, most of the software written for iOS and macOS is
written in C++, Objective-C or Swift. All three are class-based object-
oriented languages.

What you say may not be true or may not be very important.

~~~
rdtsc
Why is there so much COBOL or Visual Basic still around? Or tons of PHP and
JavaScript on the server? Just because there are lots of programs written in
$x doesn't mean $x is better, more maintainable, more efficient, or easier to
understand. It could be, but it doesn't have to be.

That argument assumes the average developer, university, or company can look
around and effectively pick, understand (this implies the ability to learn),
and evaluate the merits of a framework or language. Most don't even have a
choice. They are taught, or maintain an existing code base, or listen to what
they are told by management, and that's their choice. Managers hire for the
code base that's already there and for the languages and frameworks they know.
Developers who just want a job (nothing wrong with that) will pick and learn
the languages that managers hire for.

------
jhoechtl
Declaring functional programming the rescue at the very end of the post is
just not right. FP will gain you something under particular programming
requirements while being just wrong under others.

Looking back now on 25 years in software development, plain old imperative
programming has still bought me the most in terms of getting stuff done
(banana problem). With a decent set of standardisation and sane language
defaults, a mostly imperative approach will get you very far.

Golang hits that sweet spot very decently for me. The missing type
generalisations are an impediment from time to time, though.

------
StreamBright
Thanks for writing this up. I work with OOP programmers a lot and I am tired
of explaining the problems with OOP over and over. This article saves me that
effort.

~~~
romanovcode
You must be fun to work with.

~~~
radicalbyte
Looks like he works on ETL, i.e. data transformations. So in his niche
functional languages are a better match.

------
MarkMc
I love using object oriented design and find it quite odd when I meet seasoned
programmers who still don't 'get it'. It feels a bit like meeting someone who
says Obama was born in Kenya.

Here's a concrete example of object oriented design:

To understand the problem domain, go to
[https://whiteboardfox.com](https://whiteboardfox.com) and click Start Drawing
> Create Whiteboard, then draw something. Play around with different colours,
erase some lines, try undo and redo, etc.

Now here is my class diagram for implementing it:
[https://s1.whiteboardfox.com/s/7762255cabe34643.png](https://s1.whiteboardfox.com/s/7762255cabe34643.png)

I honestly don't see how you could implement it without object oriented
design. Surely it makes sense to have a Diagram class that encapsulates a list
of strokes and pictures? Isn't it easier if the Diagram class exposes
addStroke() and removeStroke() but does not reveal how it's implemented? And
shouldn't I have a separate view class which encapsulates how much zoom and
pan the user has applied to the diagram?

Could you implement Undo and Redo actions so neatly without a command pattern?
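For comparison, a minimal command-pattern sketch (invented names, not the
actual Whiteboard Fox code) might look like:

```java
import java.util.*;

// Invented sketch of the command pattern for undo/redo: each edit is an
// object that knows how to do and undo itself, and a history keeps two
// stacks of executed commands.
interface Command {
    void execute();
    void undo();
}

class Diagram {
    final List<String> strokes = new ArrayList<>();
    void addStroke(String s) { strokes.add(s); }
    void removeStroke(String s) { strokes.remove(s); }
}

class AddStrokeCommand implements Command {
    private final Diagram diagram;
    private final String stroke;
    AddStrokeCommand(Diagram d, String s) { diagram = d; stroke = s; }
    public void execute() { diagram.addStroke(stroke); }
    public void undo() { diagram.removeStroke(stroke); }
}

class History {
    private final Deque<Command> undoStack = new ArrayDeque<>();
    private final Deque<Command> redoStack = new ArrayDeque<>();

    void run(Command c) { c.execute(); undoStack.push(c); redoStack.clear(); }
    void undo() { Command c = undoStack.pop(); c.undo(); redoStack.push(c); }
    void redo() { Command c = redoStack.pop(); c.execute(); undoStack.push(c); }
}
```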

And isn't it lovely that the ViewController can switch between different modes
(Pencil Mode, Eraser Mode, etc) without needing to know anything except a
small interface that is common to all modes?

I actually get a little thrill when I think about how cleanly this design
addresses the requirements. Could I get that feeling if this were implemented
in a functional programming style?

~~~
eru
> I honestly don't see how you could implement it without object oriented
> design. Surely it makes sense to have a Diagram class that encapsulates a
> list of strokes and pictures? Isn't it easier if the Diagram class exposes
> addStroke() and removeStroke() but does not reveal how it's implemented? And
> shouldn't I have a separate view class which encapsulates how much zoom and
> pan the user has applied to the diagram?

Data structures and operations on them are indeed a useful concept. Not
limited to oop, though.

> Could you implement Undo and Redo actions so neatly without a command
> pattern?

Persistent data structures make it even easier. You just keep a list of
references to the old states around.
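A sketch of that in Java (invented names), where every edit produces a new
immutable diagram and undo just moves a cursor back through the old
references:

```java
import java.util.*;

// Invented sketch: undo via persistent (immutable) states rather than a
// command pattern. Every edit yields a new Diagram; old diagrams are kept
// in a history list and never mutated.
final class Diagram {
    final List<String> strokes;
    Diagram(List<String> strokes) { this.strokes = List.copyOf(strokes); }

    Diagram addStroke(String s) {
        List<String> next = new ArrayList<>(strokes);
        next.add(s);
        return new Diagram(next);  // the original stays untouched
    }
}

class Editor {
    private final List<Diagram> history =
            new ArrayList<>(List.of(new Diagram(List.of())));
    private int current = 0;

    Diagram now() { return history.get(current); }

    void apply(Diagram next) {
        history.subList(current + 1, history.size()).clear();  // drop redo branch
        history.add(next);
        current++;
    }

    void undo() { current--; }
    void redo() { current++; }
}
```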

> I actually get a little thrill when I think about how cleanly this design
> addresses the requirements. Could I get that feeling if this were
> implemented in a functional programming style?

I can't predict your feelings, but what you described could very well be done
in eg Haskell. (You just wouldn't use oop classes, but you can abstract over
similar concepts.)

Have a look at
[http://shaffner.us/cs/papers/tarpit.pdf](http://shaffner.us/cs/papers/tarpit.pdf)
for some thoughts on software design. (The paper advocates using relations. I
have seen them work very, very well in functional settings. Just the opposite
of ORMs.)

~~~
eru
PS I feel your joy on finding a nice way to abstract your domain and represent
it just right.

------
elgoog1212
OO is one of those things best used in strict moderation. Unfortunately, most
people lack moderation, and strive not to necessarily solve the issue, but to
show everyone just how smart they are. As a result we get object hierarchies
10 layers deep, and 1000-line source files (or worse, dozens of 100-line
source files) which don't do anything meaningful.

------
aibottle
God damn it I begin to hate Medium. Just another Bullshit article. When I read
those dips __ts description: "Software Engineer and Architect, Teacher,
Writer, Filmmaker, Photographer, Artist…" Great. And you want to tell me that
OO is dead and functional the only future? Fuck off.

~~~
fbonetti
The self-descriptions people use these days are so ludicrous it's hard to tell
if it's satire. If I had a Medium account, here's how my description would
read:

"Software Engineer, Philanthropist, Astronaut, Shark Hunter, Breaker of
Chains, Lord Commander of the Snack Bin, Protector of the Repo, and Part-time
Cat Dad"

Too much or just right?

~~~
ternaryoperator
"former philanthropist" would give it a kind of edgy feel

~~~
gnuvince
"I went bankrupt philanthropying, that's how much I care!"

------
stillworks
Was there really a need for this article?

What if every Java developer who discovered the immense cerebral gratification
in Scala decided to write an article with the theme "Aww shucks... Frick You
Java, I wasted so much time on you damn it !!! I am going to Scala and I am
never coming back."

Also, the examples the author gives may be weak. Inheritance breaks my code?
If it's code I don't own, I use dependency management. If it's code within the
same team, then code review before commit?

The reference-owning example for encapsulation assumes references are globally
held?

(PS: Just using Java/Scala here, but feel free to vote me down if the
experience is different with other language pairs. Oh, also, I am having dirty
dreams of leaving Java to indulge in Scala's monads, as I recently discovered
I wasted time on Java.)

------
TheLarch
A Lisp Weenie assertion that "OO" is a feature list, not a solution in and of
itself, and that CLOS is embarrassingly better than the OO in C++/C#/Java.

------
vlunkr
The king is dead, long live the king! Thinking that a new
framework/language/paradigm will solve all your problems is naive. The author
should know that if they've truly been programming for decades, as stated in
the article.

~~~
gnuvince
Who said anything about "solving all problems"? How about just making some
things a bit better? I get really annoyed when I see the argument "Y does not
solve all the problems in X, therefore there is no point in moving away from
X."

------
pfultz2
C++ already moved past OOP when it was standardized in the 90s, with a
standard library built around regular types and generic programming. Here is
Sean Parent's talk 'Inheritance Is The Base Class of Evil', which discusses
some of the same issues with OOP and the solution in C++:

[https://channel9.msdn.com/Events/GoingNative/2013/Inheritanc...](https://channel9.msdn.com/Events/GoingNative/2013/Inheritance-
Is-The-Base-Class-of-Evil)

------
finavorto
I'm at the point now where I just refuse to read any Medium post titled
"Goodbye, {x}".

------
Artlav
I wonder if someone has invented "modular" programming yet.

Judging by the UNIX paradigm of the command line tools, the idea is clearly
out there.

Instead of objects, do modules - things that do one thing, and carry minimal
dependencies.

You need a banana? Grab the banana module. You need a banana with ice-cream
center? Feed the "center" callback of the banana module with "ice-cream"
instead of "banana intestines".
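Roughly, in Java (all names invented for illustration), the banana module
with a pluggable center is just a callback parameter:

```java
import java.util.function.Supplier;

// Invented sketch of the "banana module with a pluggable center": the
// module does one thing, and its variable part is supplied as a callback.
class BananaModule {
    private final Supplier<String> center;

    BananaModule(Supplier<String> center) { this.center = center; }

    String make() {
        return "banana with " + center.get() + " center";
    }
}
```

You need the ice-cream center? `new BananaModule(() -> "ice-cream").make()`.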

You need a copier? Grab both printer and scanner.

Is there an existing language that matches what I'm describing?

~~~
nitsujin
That's functional programming.

~~~
jolux
The irony of this and the comment above it...

------
ahmedfromtunis
I've enjoyed OOP more than anything else. The real issue here is that these
pillars are but low-level building blocks. To fully take advantage of the OOP
paradigm, you need to take a look at DESIGN PATTERNS. They'll solve (almost)
any issue mentioned here. That is, if you know how to apply them the right
way, at the right time (just like everything else in this damn world).

------
sebastianconcpt
It seems to me that my OOP is so functional that I didn't feel these issues
that badly (it is true that I actively evaded them in the design), and at the
same time it sounds like falling into them is typical of not-so-great OOP
programmers.

It's curious to see that OOP hate coming from someone that got a chance to
work in Smalltalk.

~~~
rhizome
So many of these essays feel academic. Why doesn't anybody ever talk about
languages in terms of their ability to satisfy business rules? There is a
whole world (or valley) of MBAs shoehorning great developers into their ideas;
why not optimize for the interfaces _there_?

Of course I may be dumb and what I'm asking for is ColdFusion or Business
Objects or Salesforce or something.

------
davidad_
The specific problem described in the "encapsulation" section is solved in
modern C++ (11/14) by std::unique_ptr. While this may seem like a trivial
quibble, I think it's part of why I find modern C++ quite tolerable despite
disliking almost every other "object-oriented" language.

------
halayli
OO paradigms are not magical and they have a learning curve. They can look
simple and obvious but knowing how to abstract your problems using these
techniques is not simple and it's what differentiates a good programmer from a
bad one.

It's easy to complain about them but in most cases I see it's a misuse issue.

------
juliangamble
This article makes the same argument but with better reasoning:
[http://www.smashcompany.com/technology/object-oriented-
progr...](http://www.smashcompany.com/technology/object-oriented-programming-
is-an-expensive-disaster-which-must-end)

------
adamnemecek
I think that fundamentally OOP and FP are both necessary for any language that
wants to run relatively close to the metal.

The reason is that a computer is fundamentally all about state, and you need
something to manage that state. This is the antithesis of FP. OOP manages
state somewhat more nicely.

~~~
lotyrin
Actors (classes that call with true async messages, potentially over the
network) composed of objects (classes that call with simple sync stack) which
use pure functions whenever possible.

Explicitly handled state + mutations, polymorphism, implicit safe concurrency,
fast local calls + opportunities to optimize + safe and easy to reason about.

~~~
adamnemecek
We are agreeing right?

~~~
lotyrin
Absolutely. I just wanted to put in a mention for actors.

------
ern
We keep getting caught in theoretical cesspits. Perhaps the way forward is to
reduce our focus on philosophical discussions of programming paradigms, and to
iteratively figure out, using well-defined metrics and outcomes, how best to
develop software(and to define these in the first place). Taste, one-size-
fits-all trends and hype are what drive the industry, and we tend to ignore,
or hopelessly lament, the (unmeasured) waste that results from these.

And then, once we have hard data, we should have the courage to follow the
data, even if it means throwing away our cherished pet paradigms and
methodologies.

------
dhab
As someone who recently started learning FP in Haskell, I think one cannot
look at individual parts and compare OO to FP. I find that while both have
strengths and weaknesses, in FP the sum of the parts is much greater than in
OO for comparable energy invested, at least in problem areas where performance
matters but is not too critical.

That has been my cumulative verdict so far learning FP; perhaps this view will
sway one way or the other as I learn more.

------
Yokohiii
Reading quickly through the article, I was already prepared for OP to shift to
FP. OP should assess his own fallacies and not blame imperfect concepts. One
can probably improve certain things by switching paradigms, but we as humans
fail at conception, communication and complexity (although we can brute-force
the latter). There is no language that can solve these problems sanely, and it
is questionable that any ever could.

------
mempko
This person has been doing class oriented programming for years and calls it
OO. He will try structured programming with recursion and call it FP now...

------
prashnts
I find the `Printer + Scanner ~= Copier` example poorly designed.

Sure, the Copier has both a Printer and a Scanner; however, in practice, the
"Start" function on a Copier differs from either: it starts the scanner and
forwards the result to the printer. It might also print multiple copies.

The point being, the `start` functionality here differs from both Printer and
Scanner, hence the `start` method shouldn't be inherited.

------
skocznymroczny
Scanner and Printer can be made interfaces; then Copier can hold references to
an IScanner and an IPrinter. It doesn't have to care about their concrete
implementations: as long as it's something that has a scan() method and a
print() method, for all the copier cares it doesn't even have to be a powered
device. It could be a cloud printer and a scanner located 1000 miles away.
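Sketched in Java (interface names as suggested above, everything else
invented):

```java
// Invented sketch: Copier depends only on the IScanner/IPrinter
// interfaces, injected via the constructor, so the concrete devices can
// live anywhere.
interface IScanner {
    byte[] scan();
}

interface IPrinter {
    void print(byte[] document);
}

class Copier {
    private final IScanner scanner;
    private final IPrinter printer;

    Copier(IScanner scanner, IPrinter printer) {
        this.scanner = scanner;
        this.printer = printer;
    }

    void copy() {
        printer.print(scanner.scan());  // local or cloud devices alike
    }
}
```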

------
Waterluvian
My experience with ReactJS has been the first time I felt I had the perfect
balance of OOP and FP.

The components are so well defined as objects since they have the luxury of
being tangible. But using them in a pure manner with zero local state makes
them so easy to reason about and reuse.

More can be said about Redux but I'll leave it there.

~~~
domlebo70
What is OO about React though? You have data, and pure functions. Where does
OO come into it?

~~~
Waterluvian
You can create components from functions or as classes. I will create a button
class, which has some methods for how it renders and updates itself and how a
user interacts with it. I can then subclass it to make a toggle button or
whatever else.

Say I have a form with 4 fields and a "Submit" button. The submit button's
`onClick` tells its parent that it was clicked. The parent calls `getValue()`
on all of its children (the form fields) and dispatches a `updateMyDatabase()`
action.

My example may be controversial to some as my form fields have state. Some may
say that on every keystroke, the form should have an action called that
updates the `centralState.formValue`. But regardless of that, I think it's
still evident how there's OOP involved?

------
matchagaucho
I would rather continue using the functional features of Java7 and C# than
switch entirely to Erlang/Scala.

Usage of interfaces, the immutable _final_ keyword, and anonymous methods is
powerful and flexible enough to move beyond the constraints of _pure_ OOP.
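For instance, a trivial pre-lambda sketch (invented names):

```java
// Invented sketch of that Java 7 style: an interface, a final (immutable)
// captured variable, and an anonymous class standing in for a function
// value.
interface Transformer {
    int apply(int x);
}

class Example {
    static Transformer adder(final int amount) {  // 'final' lets the anonymous class capture it
        return new Transformer() {
            public int apply(int x) {
                return x + amount;
            }
        };
    }
}
```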

------
stevesun21
A programming paradigm gets accepted massively not because people hate its
predecessor; it's because the new one is more intuitive and useful. If you
hate OOP so much, then prove how it is counter-intuitive compared to FP.
Constant complaining makes you sound too emotional; as a SE you should know
how to analyze objectively.

FYI, in the OOP paradigm, inheritance, encapsulation and the other concepts
all serve one goal: designing a better interface, one that also follows how
the real world is designed, for example the power outlet in your home.

------
rukuu001
Makes me feel like writing "Goodbye, Functional Programming" and making my
case with a bunch of bad development practices.

A good programmer writes good programs.

The tools don't really come into it.

------
moron4hire
No project ever failed specifically because of the paradigm--or programming
language, even--used to implement it. Project failure is a people problem.

~~~
Shorel
Monotone (when compared to git).

[https://en.wikipedia.org/wiki/Taligent](https://en.wikipedia.org/wiki/Taligent)

[https://en.wikipedia.org/wiki/AI_winter](https://en.wikipedia.org/wiki/AI_winter)

Those are two C++ failures and one Lisp failure, and language did play a big
role in each.

~~~
moron4hire
You have completely misinterpreted these projects.

------
mirap
So, show me a better approach than object-oriented programming; offer me any
solution. Otherwise this article is just a pointless complaint.

------
Clubber
Regarding:

    
    
      Class Copier {
          Scanner scanner;
          Printer printer;
          function start() {
              printer.start();
          }
      }
    

---

Placing a Start() in a PoweredDevice base class doesn't make sense in the real
world. There are plenty of "powered devices" that don't have start buttons. A
phone, a fish tank pump, a smoke alarm, none have a "start." A powered device
should have just that, a PowerOn() and PowerOff() or SetPower(bool isOn). I
wouldn't even create a PoweredDevice base class unless you have a reason. This
is the main fault in your design.

Scanner.Start() should return a byte[] which is the result of the scan: byte[]
Scanner.Start(); A scanner is an input device.

Printer.Start() should take an argument of byte[] as to what it is to print:
void Printer.Start(byte[] byteArr); A printer is an output device.

Having said that, your Copier class would look like this:

    
    
      class PoweredDevice
      {
         public virtual void SetPower(bool isOn)
         {
           ...
         }
         // Start() doesn't belong here.
      }
    
      class Copier : PoweredDevice
      {
    
        Scanner scanner;
        Printer printer;
    
        public override void SetPower(bool isOn)
        {
            printer.SetPower(isOn);
            scanner.SetPower(isOn);
            base.SetPower(isOn);
        }
    
        public void Start()
        {
            byte[] document = scanner.Start();
            printer.Start(document);
        }
      }
    

This can easily be enhanced to handle copy counts:

    
    
        void Start()
        {
            byte[] document = scanner.Start();
            for (int x = 0; x < copyCount; x++)
                printer.Start(document);
        }
    

Ideally you wouldn't even make an inheritable Start() method. The Scanner
class would have a byte[] Scan() method and the Printer class would have a
Print(byte[] byteArr) method. You're trying to ram a square peg into a round
hole. Use inheritance when it is convenient and makes sense to do so. Don't
force it. Think about what a scanner and a printer have in common that works
the same way, then put that in your base class. A power button is about it.

A lot of inheritance is done backwards. You make your classes then find
commonalities and put that in your base class. Only create a base class first
if you've thought about your object model and you know the commonalities.

Also, there is no reason to make your inheritance chain deep, just because.
Build your objects in a way that makes sense. Don't write code or base objects
you will never use. You can always insert a class in the chain when necessary.

Mastering OOP is hard, and people who have mastered it get paid a lot of money
for their skill. It took me a few years to really understand how to design
with it. It's invaluable though. A good object model is a thing of beauty, and
a hell of a lot of fun to design.

Edit: I don't know why the editor won't keep the CR's.

~~~
Nullabillity
Indent it with 4 spaces to make HN recognize it as a code block.

~~~
Stratoscope
It's two spaces, but of course four works too.

[https://news.ycombinator.com/formatdoc](https://news.ycombinator.com/formatdoc)

~~~
Nullabillity
Whoops. Too used to Markdown, I guess.

------
JustSomeNobody
And Hello, Clickbait headline!

------
adamconroy
Hey, ingve, stop trolling us. You wasted 5 minutes of my life by posting this.

------
PhasmaFelis
Oh, is it time to declare a popular and widely-used thing dead again?

------
MawNicker
Object Oriented Programming simulates the restrained reasoning capacity of the
real world. This is done by weaving state into every conceivable unit of
computation. The result is a universal and inescapable notion of identity.
It's a state conspiracy! Sometimes you _are_ actually interacting with the
real world and this is an appropriate constraint. That is only because, in the
_real_ real world, these things are pervasively intertwined. Right down to the
smallest phenomena we've been able to observe. We can't actually take them
apart except for in our minds. To do so is a very old idea, pervasively
apparent in western thought, called platonic realism. I internalized it as an
unknown known at some point. I imagine that's just how people did it before
someone as smart as Plato was able to articulate it. It's sort of _the_
doorway to abstract thought. Most mathematically inclined people have ventured
into the depths of the world it conceals. It's necessary in order to properly
understand the concept of a "value". When these people first start to program
they rely heavily on expressions and functions. They tend to atomize complex
values with simple structs. They don't know they're doing it but they're
writing "functional" programs. It might be more apparent if we just called
them mathematical or algebraic programs. They demonstrate a preference for
referential transparency without knowing what it is. Much of their code is
outright stateless. They're hesitant to use a "var" as anything but a "let".
Many seem to immediately grasp the simplicity and generality of recursion.
They have to have it pried away from them like it's a dangerous recreational
drug. That recursion is not "optimal" is simply presented as an engineering
reality. Always intent on incremental improvement they diligently internalize
these "optimal" representations utilizing loops and state. They're tricked
into feeling they've acquired a worthwhile skill; They don't know they're
doing what a compiler ought to. They learn to reserve the truly optimal
representations for their mind's eye. With the desire to utilize their new
"skill" they move towards external representations that could only be
considered "optimal" by an unconscious machine. All of this damage is done in
the earliest stages of learning; Probably before they've even attempted any
significant programmatic interaction with the real world. That's when
everything gets worse. They start trying to coordinate too much state and they
can't cope. They're told they need these object things. _Everything_ seems to
get easier: Sockets, Widgets and even the Lists that had been such a struggle
to use before. They choke down the declaration syntax and hastily strap their
newfangled constructor and destructor gadgets onto their toolbelts. These
_are_ excellent tools for arbitrating the abstract world and the real one. The
ability to hook into their creation and destruction provides abstract objects
with a canonical state-of-existence. This is necessary to fully simulate the
identity possessed by real objects. For the purposes that they've learned
them, objects are immediately and overwhelmingly useful. They come to
appreciate the clarity of the method invocation syntax for manipulating state.
They're right to do so. The functional languages themselves even sort of "do"
it. Tragically with their most fundamental notions of computation already
brutally violated by the state conspiracy, they're vulnerable to seeing
objects as a universal paradigm. Everything is an object. _Everything_. They
ascribe pet-hood to their little objects and feel driven by the satisfaction
of teaching them their own special tricks. Each and every one of them is an
excessively black box. Some go so far as to make social-networks called UML
diagrams to protect them from inappropriate "friends". They have forgotten the
elegant abstract world that was left for them by the intellectual giants of
history. They descended from it in pursuit of mere performance and are in
serious danger of never returning. To act like it's just another way of
looking at things is a brutal misunderstanding. It's a discipline that resides
entirely within a much larger one that it is not a suitable replacement for,
despite the confusing desperation of non-academics for it to be that. Even
its creators are disappointed by its dominance.
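The comment's point about recursion being rewritten into loops can be sketched in a few lines. This is an illustrative example (the function names are my own, not from the thread): the recursive form is referentially transparent, while the loop form introduces the mutable state the comment argues a compiler ought to manage for you (e.g. via tail-call optimization).

```python
def total_recursive(xs, acc=0):
    # The "mathematical" form: no mutation, just expressions.
    if not xs:
        return acc
    return total_recursive(xs[1:], acc + xs[0])

def total_iterative(xs):
    # The "optimal" representation beginners are taught to
    # rewrite it into: a loop and a mutable accumulator.
    acc = 0
    for x in xs:
        acc += x
    return acc
```

Both compute the same value; only the second forces the programmer, rather than the compiler, to manage the intermediate state.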

~~~
PhasmaFelis
I don't want to be rude, but that is nearly impossible to read without
paragraph breaks.

~~~
MawNicker
Well I can't edit it now. It was mostly a rant anyway. The widely held
assumption that OOP is a universal paradigm has done unfathomable damage to
Computer Science. It implies nothing more than that every Turing machine is a
Turing machine. Then after ignoring every other factor it declares itself
supreme based on nothing but its adjacency to the one Turing machine we were
already stuck with. I constantly see people fall for this contrivance in
various forms. They just can't stand up to it when it's presented alongside
the credence of industry.

------
GFK_of_xmaspast
The author's beef with encapsulation seems to be that when an object A is used
as an argument in the constructor to object B, the latter needs to do a deep
copy (as keeping a pointer is not "safe"), which is of course not always
possible.

I'm at a loss as to what this has to do with encapsulation, and even less able
to understand how any language with user-defined data types is going to be
able to avoid it.
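The aliasing problem being described can be shown in a few lines. This is a hedged sketch, not the article's own code; the `Wheel`/`Car` names are hypothetical. Keeping a reference lets outside code mutate the "encapsulated" state through its alias, which is why a deep copy is suggested as the "safe" option:

```python
import copy

class Wheel:
    def __init__(self, pressure):
        self.pressure = pressure

class Car:
    def __init__(self, wheel):
        # Keeping the reference: outside code can still mutate our
        # supposedly private state through its alias.
        self.wheel_aliased = wheel
        # A deep copy severs the alias, at the cost of copying --
        # which is not always possible (sockets, file handles, ...).
        self.wheel_copied = copy.deepcopy(wheel)

w = Wheel(32)
car = Car(w)
w.pressure = 0  # mutation through the caller's retained reference
# car.wheel_aliased.pressure is now 0; car.wheel_copied.pressure is still 32
```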

~~~
huahaiy
Languages with immutable data structures?

~~~
astrange
Or linear types.

Actually, immutable containers aren't good enough; you need all the leaves to
be immutable too.
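The shallow-vs-deep distinction is easy to demonstrate; here is a minimal sketch in Python, where a tuple is an immutable container but anything mutable stored inside it remains mutable:

```python
leaf = [1, 2, 3]
container = (leaf, "label")  # tuple: an immutable container

# The container itself can't be changed:
#   container[0] = []  would raise TypeError

# ...but a mutable leaf inside it still can be:
container[0].append(4)
# container is now ([1, 2, 3, 4], "label")
```

Only making the leaves immutable as well (or using linear types to forbid aliasing) closes this hole.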

------
RantyDave
Author writes tightly coupled architecture, discovers it sucks. So, of course,
moans about OOP.

~~~
dang
Please don't post snarky dismissals here. Thoughtful criticism is fine.

