
Object-Oriented Programming is Garbage: 3800 SLOC example [video] - howsilly
https://www.youtube.com/watch?v=V6VP-2aIcSc
======
pdkl95
While I'm sure the topic (and title) may be seen as hostile, this is a very
nice code walkthrough and discussion.

The arguments against OOP seem to be 1) OOP obscures data structures by hiding
them in too many data types, and 2) the boundaries created by these data types
forces workarounds like extra fields to hold a root structure pointer even
though it isn't part of the type.

OOP - _in practice_ - does tend to encourage far too many types. A lot of
these types often represent code or design decisions, not the actual data.
Instead of a "few" structs that clearly show how the data is organized, we
tend to see OOP code that has the data structure spread out over various
classes, interleaved with types that actually represent different actions
(with the same data).

The solution I always try to keep in mind is that OOP can be useful _if-and-
only-if_ the data actually has a hierarchy. A single standalone class is
equivalent to a basic struct passed to functions. The class only becomes
worthwhile when you are able to use polymorphism that closely models the
data. The serious problems start when you try to force _everything_ into a
class; in extreme cases you end up with the Kingdom of Nouns.[1]

The second argument seems to follow from the first; if you have too many
types, you tend to take shortcuts like keeping a local reference to a parent
or global object that is really outside your scope. This tight coupling
creates maintenance problems and further confuses what the type is actually
representing.

[1] [http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...](http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html)
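A minimal Go sketch of the back-pointer workaround described above (all
names here are hypothetical, not taken from the video's code):

```go
package main

import "fmt"

// Game is the root object; Enemy keeps a back-pointer to it even
// though the Game is not really part of an Enemy's own data.
type Game struct {
	score int
}

type Enemy struct {
	hp   int
	game *Game // the back-pointer workaround: couples Enemy to the whole Game
}

func (e *Enemy) Die() {
	e.game.score += 100 // reaches outside its own scope via the back-pointer
}

// The plain-struct alternative: pass the root explicitly, so the
// dependency is visible at every call site.
func enemyDie(g *Game, e *Enemy) {
	e.hp = 0
	g.score += 100
}

func main() {
	g := &Game{}
	e := &Enemy{hp: 1, game: g}
	e.Die()
	enemyDie(g, e)
	fmt.Println(g.score) // 200
}
```

Passing the root explicitly makes the coupling visible at the call site,
which is exactly the coupling the back-pointer version hides.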

~~~
stcredzero
_The solution I always try to keep in mind is that OOP can be useful if-and-
only-if the data actually has a hierarchy...The class only becomes worthwhile
when are able to use polymorphism that closely model the data._

You don't need a hierarchy to have polymorphism. The Observer pattern is
very useful in my space game in Go. I wrote a small pair of interfaces that
make it very easy for certain objects to register to receive notification
when a ship despawns.

    
    
    type DespawnListener interface {
    	NotifyDespawnShip(Ship)
    }
    type DespawnNotifier interface {
    	SetDespawner(Ship)
    	AddDespawnListener(DespawnListener)
    	NotifyDespawn()
    }
    type basicDespawnNotifier struct {
    	ship       Ship
    	dependents map[DespawnListener]bool
    }
    func (self *basicDespawnNotifier) SetDespawner(ship Ship) {
    	self.ship = ship
    }
    func (self *basicDespawnNotifier) AddDespawnListener(dsl DespawnListener) {
    	if self.dependents == nil {
    		// lazily initialize, so a zero-value notifier composed
    		// into a struct doesn't panic on a nil map
    		self.dependents = make(map[DespawnListener]bool)
    	}
    	self.dependents[dsl] = true
    }
    func (self *basicDespawnNotifier) NotifyDespawn() {
    	for listener := range self.dependents {
    		listener.NotifyDespawnShip(self.ship)
    	}
    }
    

My framework sends NotifyDespawn() to any despawning ship. Now anything that
has registered interest in that ship, like missile locks, tracking AI ships,
or tracking missiles, will be notified when that ship despawns. Note that this
makes the system extensible. Right now I have each Ship entity tracking
dependents in its own map, which it can do by composing a basicDespawnNotifier
into its struct. Later, I will move the implementation guts into the object
that implements the game loop/instance, which will eliminate the map from
despawning entities, reduce GC pressure, and allow me to cut off circular
dependencies. Note that all of the game code using the two interfaces won't
change at all.

No hierarchy is necessary in Golang or Smalltalk, because both have implicit
interfaces and allow composition. (Golang is nice because its implicit
Interfaces are made explicit and typesafe.)
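To illustrate the implicit-interface point (Missile here is a hypothetical
type, not from the parent comment's actual game): any type with a method of
the right signature satisfies DespawnListener, with no declared hierarchy
anywhere.

```go
package main

import "fmt"

type Ship struct{ name string }

// Same shape of interface as in the parent comment.
type DespawnListener interface {
	NotifyDespawnShip(Ship)
}

// Missile never declares that it implements DespawnListener; having a
// method with the right signature is enough, and the compiler checks it
// wherever a *Missile is used as a DespawnListener.
type Missile struct {
	target *Ship
}

func (m *Missile) NotifyDespawnShip(s Ship) {
	m.target = nil // drop the lock when the target despawns
	fmt.Println("missile lost lock on", s.name)
}

func main() {
	ship := Ship{name: "Reliant"}
	m := &Missile{target: &ship}
	listeners := []DespawnListener{m}
	for _, l := range listeners {
		l.NotifyDespawnShip(ship)
	}
	fmt.Println(m.target == nil) // true
}
```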

The op is somewhat misguided. Good OO is a matter of good design, just as
good computer games are a matter of good design. Expecting good OO from any
programmer who just knows the "rules" of OO is like expecting the output of
every game jam hackathon to be a good game. Of course half of all OO
projects found on GitHub with "obligatory" OO are going to show below-
average design, just as not every game from a game jam is a masterpiece.

~~~
marcus_holmes
I think you've just reinforced op's point instead of showing how he was
misguided.

Saying "Good OO is comprised of good design" is meaningless - good code,
regardless of methodology, comes from good design. You can't have good code
unless it's been designed well. Good design is not a feature of the OO
methodology; design quality and methodology are two independent properties
of a program.

"Expecting good OO of a programmer who just knows the rules of OO" is exactly
what OO was sold to us as achieving. By following the rules of OO design we
were supposed to be able to automatically have better programs that were less
likely to end up as spaghetti. If we wrote lots of small classes that just did
one thing well and encapsulated their state properly we would magically have
programs that were easy to understand and maintain, and we would be able to
re-use the classes in other programs!

This was hardly ever achieved by anyone, and if it was it was independently of
OO methodology not due to it. As you point out, reinforcing the op's points,
it's very easy to get OO very wrong, even for experts.

~~~
stcredzero
_I think you've just reinforced op's point instead of showing how he was
misguided._

I can change the underlying implementation of this subsystem, giving it
additional capabilities, modulo some find-erase and compiler-guided changes,
which I can be 100% sure are done correctly. Please give your own example of
code which has the above properties.

_"Expecting good OO of a programmer who just knows the rules of OO" is
exactly what OO was sold to us as achieving._

As someone who was around for all of that rigamarole since the '90s, I am
making precisely the point that this is wrong. The same goes for functional
programming. The same goes for playing music or designing the inside of
your house. There's a little practice, iteration, and self-criticism that
has to happen. Otherwise, what you get is pretty mediocre.

------
chriswarbo
I enjoyed this. Skipped through the parts about the inner NES workings, but
the "embarrassing" video is more to the point and shorter. I found myself
agreeing with a lot of points and disagreeing with some too.

It's interesting how many comments there are, here and on YouTube, along the
lines of the examples being "bad code, regardless of OOP", "not representative
of a large team/project", "unmaintainable", etc. without even a link to an
example snippet or repo, let alone a narrated walkthrough of existing code and
possible alternatives like the author has provided.

Without concrete examples, all of these counter-arguments are in danger of
becoming a No True Scotsman fallacy. In fact, the code in the
"embarrassing" video is taken from actual examples of respected OO
programmers showing how OOP should be done, which is a high bar to refute.

The author also addresses this a little, by pointing out the often-invoked
circular reasoning of "my software's a mess even though I followed OOP", "if
it's a mess you must not have been doing OOP properly".

~~~
Chris2048
Sometimes software methodologies are a little like the stone-soup story:
vaguely defined, so that when things turn out badly you can always wriggle
out by claiming it was done wrong and that there's a better way to do it
that works. When you follow up for details, the full "right way" sets such
a high bar that only a few true believers will ever actually follow it.

I want a methodology that provides improvement on average, even with only
partial uptake. Many other best practices are independently virtuous, and
hence have this quality.

------
thunderbong
Previous videos

Object-Oriented Programming is Bad
[https://www.youtube.com/watch?v=QM1iUe6IofM](https://www.youtube.com/watch?v=QM1iUe6IofM)

Object-Oriented Programming is Embarrassing: 4 Short Examples
[https://www.youtube.com/watch?v=IRTfhkiAqPw](https://www.youtube.com/watch?v=IRTfhkiAqPw)

I'm watching the 'is Bad' one, and honestly, it rings too close to my
experience.

Need to think about this a bit more deeply!

~~~
zby
Why is it all videos?

Text is so much better for evaluating arguments. When reading you can spend
variable time on different parts to skim the obvious and go deep into the more
difficult stuff, you can random access it later to refresh your memory, you
can quote it.

Videos are probably only better at persuading people - but that requires
that they watch them (and maybe get some Stockholm syndrome after wasting
so much time :).

~~~
spuz
It's not all videos. See Brian's article about OO here:

[https://medium.com/@brianwill/object-oriented-programming-a-...](https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab)

and the HN discussion here:

[https://news.ycombinator.com/item?id=10933330](https://news.ycombinator.com/item?id=10933330)

~~~
zby
Thanks for the links.

Still apparently the whole argument is only in video:

>>> [Edit: I’ve greatly expanded on these ideas in a 45 min. video.]

------
lyudmil
This is a stimulating argument to think about.

One possible counter-argument is that the resulting procedural code is more
difficult to unit test. In the video Brian acknowledges that he doesn't think
much about cyclomatic complexity, but I think if you were to try to test one
of these long functions with a lot of branches, the setup required for each
case you want to test would dominate the amount of code devoted to the actual
test. The solution to this isn't necessarily OOP, but I would not want to
maintain Brian's code as it stands without refactoring it.

The more fundamental problem to me is that Brian's code requires you to read
it fully in order to understand it. No part of the code is difficult to
understand, but it demands that I understand it in detail before I can have a
summary of it in my head. I prefer for code to read more like a newspaper
article, where the further down you read, the more detail you're given, but
you can stop at any point and have a decent understanding of what's going on.
To me this makes it easier to reason about the system as a collection of sub-
systems, which is often the level at which the really important decisions are
made. Again, OOP isn't the only "solution" for this, but I think it's a good
tool. It does have the potential, as Brian points out in one of his earlier
videos, of devolving into philosophical exercises, but I don't think that's
necessarily a drawback if your goal is to make it easier for humans to
comprehend your system. In designing your system this way, you're simply
taking advantage of people's natural ability to think in abstractions.

~~~
piokuc
To me, the idea of inlining functions that are otherwise called only once
is complete garbage. Yes, you reduce the number of functions in your
program, but the program doesn't get easier to read, test, develop, and
maintain as the functions that remain get longer... I stopped watching when
I learned that that's the presenter's idea of improving the original
program. And it has _nothing_ to do with OOP and its flaws; you could just
as well "improve" well-written functional programs this way... It's just
wrong. Inlining functions is something your compiler is supposed to do for
the programmer; a programmer is better off breaking his problem into small
functions instead.

~~~
yoklov
Here's a fairly well reasoned argument to the contrary: [http://number-none.com/blow/john_carmack_on_inlined_code.htm...](http://number-none.com/blow/john_carmack_on_inlined_code.html)

It boils down to the fact that it's very easy to have unexpected state
dependencies/constraints in those smaller functions that are non-obvious
from just reading them. If they're called when those constraints are not
met, it is a bug.
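A contrived Go sketch of that failure mode (all names hypothetical): a
small helper silently depends on state set elsewhere, so the ordering
constraint is invisible at the call site.

```go
package main

import "fmt"

var frameReady bool // hidden state the helper silently depends on

func beginFrame() { frameReady = true }

// renderHUD takes no parameters, so nothing at the call site hints
// that it must only run after beginFrame(). In an inlined version,
// the ordering would be plainly visible in one place.
func renderHUD() string {
	if !frameReady {
		return "BUG: HUD drawn before frame began"
	}
	return "hud ok"
}

func main() {
	fmt.Println(renderHUD()) // called too early: the constraint was invisible
	beginFrame()
	fmt.Println(renderHUD())
}
```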

~~~
piokuc
That's why one should try to make functions pure as much as possible.
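A sketch of that suggestion in Go (hypothetical names): thread the state
through parameters and return values, so the constraint shows up in the
signature instead of hiding in a global.

```go
package main

import (
	"errors"
	"fmt"
)

// Frame makes the otherwise hidden state explicit.
type Frame struct {
	begun bool
}

// A pure(r) version: the dependency on frame state is part of the
// signature, so a caller can't overlook it, and the error forces
// misuse to be handled rather than silently producing a glitch.
func renderHUD(f Frame) (string, error) {
	if !f.begun {
		return "", errors.New("frame not begun")
	}
	return "hud ok", nil
}

func main() {
	if _, err := renderHUD(Frame{}); err != nil {
		fmt.Println("caught misuse:", err)
	}
	out, _ := renderHUD(Frame{begun: true})
	fmt.Println(out)
}
```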

------
mpweiher
"Sure 90% of OOP is crap. That's because 90% of everything is crap" [1]

[1]
[https://en.wikipedia.org/wiki/Sturgeon%27s_law](https://en.wikipedia.org/wiki/Sturgeon%27s_law)

~~~
calibraxis
Having just dealt with it again, I'm tempted to grumble that Java's OOP is
firmly within that 90%.

What happens when something is recursively buried in 90% crap? Is its own 90%
particularly crappy?

~~~
ArkyBeagle
It's more like the limit converges on some x in 0..1 with 90% being the most
probable.

------
raverbashing
OO in itself is not bad

But of course some people want to create 10-level class hierarchies,
replace every 'if' with a subtype, and use design patterns at every
opportunity (especially the factory, so instead of writing 1 or 2 they
will have a OneIntegerFactory, a TwoIntegerFactory, etc.)

One advantage of Go, for example, is that it reduces the potential for
misuse by "OO happy" people.

~~~
notinreallife
My guess is that most "OOP-happy" people never get a chance to see beautiful
procedural code so they're just afraid of it. A lot of developers nowadays are
trained to smite any code that isn't following OOP principles or design.

~~~
nibnib
Not necessarily afraid, possibly just unfamiliar. I can remember programming
like this because C++ was the first language I knew well and I was interested
in using all the available features. In my mind that was just how things were
done. When I had more of an incentive to just Get Stuff Done and keep
things maintainable, simpler procedural code made a lot more sense.

------
charlesism
It was 3800 lines of code before, and it's 3800 lines of code after. Let's
spend the remaining 45 minutes discussing why my personal preferences are
correct, and OOP is garbage.

~~~
loup-vaillant
The procedural version has fewer files and fewer functions/methods. This
indicates that many functions in the original code were called only once,
so we might ask why they were broken out in the first place.

Having that many functions that are called only once means you have to
make more non-local jumps to read the code. This is not ideal.

Moreover, such a break-up obscures the dependency graph in the code. Many
nodes (functions) will have only one inbound edge (call). While collapsing
the original graph to a canonical form would likely yield the same result
on both code bases, the original code still shows three times as many
nodes. This makes it harder to see the real dependencies. Again, not
ideal.

Finally, there's the possibility that the procedural version made genuine
simplifications to the dependency graph. If it really reduced coupling while
keeping the LoC count the same, that's a net win.

Granted, those measures are not nearly as definitive as the LoC count
itself. Still, they do suggest the original code is not as straightforward
as it could be.

Now to really know, we'd have to read the code.

~~~
WatchDog
Splitting discrete parts of functionality into separate functions, even if
they're only called once, makes the code more readable. You don't clutter
the would-be calling function's scope with variables only needed within
the would-be inlined function's body. When you read the code, the function
name and return type can describe what is being done, rather than forcing
you to grok the body of the function itself.

~~~
adnzzzzZ
It might make the code more readable but it makes you jump around more and
adds more complexity to the code base. Now you have to consider: under what
conditions does this function get called? Does it get called more than once?
In which file is it? And so on. Every time you add another node to your code
base you're making it more complex, and in this instance it has very
questionable gains. It can make just as much sense to simply say, in the
first line of that block of functionality, what it does and to what.

~~~
WatchDog
I guess I'm used to working with languages with excellent IDE support.
Having more files and jumping around the code to see the details don't
seem to be a problem for me when using a good IDE. While I only got
through around half the video, I can see he's created 500-line functions
with 5 levels of flow control; personally I find that much harder to
follow than clicking through to a function or just reading the function's
name and API.

~~~
Narishma
Those 500 line functions include private sub-functions though.

------
Nr7
I don't like the clickbaity title. OOP != garbage, badly done and/or
unnecessary OOP == garbage.

~~~
sklogic
Given that it is unnecessary in 99.99% of cases, it is always safe to
assume that it is just garbage, period.

~~~
Nr7
I've worked on projects that use OOP and projects that use a procedural
style, and personally my brain seems to understand the OOP-styled code
much better than the procedural stuff. Do I then go around calling other
paradigms garbage just because my brain seems to function in a more
object-oriented way? Of course not. Do I think OOP is then automatically
better than the others because I understand it more easily? Of course not.
I stick to OOP because my dumb brain can't figure out other paradigms as
well.

Of course OOP is unnecessary since you don't _need_ it to write good software.
But one could argue that _all_ paradigms are unnecessary since you could
technically write any program using any paradigm. OOP, procedural, functional,
whatever. My point is that people are different, use whatever you are most
comfortable with.

EDIT: Check Bob Burrough's comment on the YouTube video. He worded it much
better than me.

~~~
sklogic
"Unnecessary" is the mildest of the OOP issues. A more common one is
"damaging and destructive", and you cannot defend against this one by
simply saying "my brain understands it better" - it won't magically become
less damaging because of that.

The truth is that OOP is in most cases the worst possible way to model the
real world. There is always a much more adequate model (besides the single
case I mentioned before). So, sticking to a paradigm that does not work at all
is nothing but a religion and should be treated as such.

~~~
bubuga
> A more common one is "damaging and destructive"

That's a baseless assertion. Just because you developed a distaste for a
particular programming paradigm it doesn't mean that your baseless assertions
are suddenly sound.

> So, sticking to a paradigm that does not work

Somehow, the software world is founded on something that you somehow claim
doesn't work.

Perhaps you've taken your pet peeve a bit too far.

------
enqk
Methodology-wise this approach is problematic, because as a refactorer he
gets to stand on the design discoveries made by the original programmer.

~~~
dang
Spot on, but this is an insight whose time hasn't yet come. For one thing, it
means that controlled comparison of programming approaches is even more
impossible than we already know it to be, which is a bummer, which is a reason
not to have the insight.

What the OP did is still a worthwhile exercise. And although it seems to have
been an accident, he made a good choice of program to study in this fashion.
Because it is an emulator, what it needs to do, and therefore much of its
design, was predetermined before the first program was written. That makes the
comparison with the second program somewhat less vulnerable to your objection,
though we have no way of measuring how much.

~~~
enqk
Good insight, I actually was wondering if the choice of the object (Emulator)
was a good or bad one.

You're making a good case that it reduces the effect of initial design
discovery since it's based on well known hardware.

------
sz4kerto
3800 lines of code is an extremely small program. OO can help when you
have a large business project and a large team, some of whom are possibly
not great coders.

Showing that one single person can write a small procedural program that's
more structured, faster, etc. than the corresponding OO code written by
someone else is not very relevant for real-world projects. I'm not saying OO
is great everywhere, but devs in the industry have been writing working code
on large scale using OO.

~~~
sklogic
What you're talking about is just better code organisation and modularity.
It's a shame it is always brought up in defence of the useless OO, as if
modularity is an exclusive OO property and not a mere side effect of it.

~~~
bubuga
> as if modularity is an exclusive OO property and not a mere side effect of
> it.

Modularity isn't a mere "side effect" of OO: it's one of the main
motivating factors that led to OO.

In fact, the main problem with these anti-OO rants by procedural purists
is that in essence they all boil down to claiming that the basic OO
features of any OO language can be reimplemented in procedural languages.

~~~
sklogic
> it's one of the main motivation factors that have lead to OO.

OO has no right whatsoever to hijack the benefits of modules. There are
some very successful module systems that have absolutely nothing to do
with any OO bullshit.

~~~
bubuga
> OO got no rights whatsoever to highjack the benefits of the modules.

You're the only one who's trying to make that assertion. It's a strawman, and
a poor one at that.

------
insulanian
Can someone provide a TL;DR? Can't access YouTube from the office :(

~~~
spuz
Read the initial article that spawned this series of videos here:
[https://medium.com/@brianwill/object-oriented-programming-a-...](https://medium.com/@brianwill/object-oriented-programming-a-personal-disaster-1b044c2383ab)

------
gelraen
Got through first 5 minutes and I already see that myself and the author have
very different opinions on what constitutes "good code":

1) having a data structure and its methods isolated in a module is not a
bad thing. Modules provide an obvious boundary between pieces of code, so
when calling some code from another module you know that you should care
only about inputs and outputs, not about the particular implementation of
the transformation. But when you have the same pieces of code mashed
together in one module, that boundary is blurred: now the compiler does
not tell you that they don't interact via some shared state that isn't
named explicitly.

2) defining an interface to do a type switch. Really? Even if that
interface is not exported, this still leads to more fragile code. When you
rely on an interface, the compiler will yell at you if you miss a method.
But when you add a new type and forget to update one of those type
switches, it will blow up in your face at run time. This is less relevant
for an NES emulator, since the hardware was designed long ago and is
unlikely to get new entities that need to be accounted for, but doing this
for no good reason can establish a habit that will bite you later in other
projects.
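A sketch of the fragility being described, with hypothetical types:
forgetting a case in a type switch still compiles, while a type missing an
interface method is rejected at compile time.

```go
package main

import "fmt"

type Sprite struct{}
type Background struct{}
type Overlay struct{} // a type added later

// Type-switch style: adding Overlay but forgetting to update this
// switch still compiles; the omission only shows up at run time.
func draw(v interface{}) string {
	switch v.(type) {
	case Sprite:
		return "sprite"
	case Background:
		return "background"
	default:
		return "unhandled type!" // run-time surprise
	}
}

// Interface style: a new type missing Draw() is rejected at compile
// time anywhere it is used as a Drawable.
type Drawable interface {
	Draw() string
}

func (Sprite) Draw() string     { return "sprite" }
func (Background) Draw() string { return "background" }
func (Overlay) Draw() string    { return "overlay" }

func main() {
	fmt.Println(draw(Overlay{})) // the forgotten case falls through
	var d Drawable = Overlay{}
	fmt.Println(d.Draw())
}
```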

------
smarx007
I just want to note that the original code wasn't covered by unit tests,
and the rewrite is not going to make it any more suitable for testing (as
you'd expect, no tests were added during the rewrite). On a critical note,
[http://pythontesting.net/strategy/why-most-unit-testing-is-w...](http://pythontesting.net/strategy/why-most-unit-testing-is-waste/)
is a good complementary read.

