
What Made OO Be Successful - rbrogan
http://rbrogan.github.io/what-made-oo-be-successful.html
======
cousin_it
I think Java-style OO was successful because it made code organization
obvious, more so than any other programming language I've seen. The key
insight is that source files are the natural unit of writing code. Each source
file has an explicit public API and private internal state, and a dedicated
namespace that always reflects the file's name and location in the source
tree. That makes it easy to see the big picture by just looking at the
directory structure and the imports.

~~~
AlexanderDhoore
But what you describe are basically namespaces. All the rest of OO is
overkill. Recipe for success: take a simple language with only functions (like
Lisp or C) and add namespaces. Result: this language can do 90% of what OO
languages are useful for, with only 10% of the features.

~~~
andrewstuart2
OO is simply a layer of human-readability on top of functions and data. As far
as the CPU is concerned, both object and method arguments are just data to
operate on. So `steve.drink(water)` in another context is just `drink(steve,
water)`, which is a little more difficult (at least in English) to reason
about.

OO principles are for the limitations of people, not computers.

~~~
chongli
I should hope it'd be obvious to most people that you don't actually need all
of OO just to get that little bit of syntactic sugar. I'd also like to point
out another obvious fact: this idiom completely breaks down when the subject
happens to be more than one person, e.g. `steve_and_irene.drink(water)`,
whereas it works fine in the procedural case (thanks to varargs):
`drink(water, steve, irene, ...)`.

~~~
seanmcdirmid
And you don't need syntactic sugar to do OOP, as you have just demonstrated
with your examples. Also, multimethods exist, but you probably want one
subject/dispatch variation per sentence anyway, for complexity reasons.

------
pavlov
I think the answer lies in the late '80s computing zeitgeist.

During the preceding decades, the object-oriented paradigm had been shown to
be highly useful in simulation (Simula) and graphical UIs (Smalltalk), but
these use cases were still far removed from the majority of real-world apps.

A lot of people looked at OO's phenomenal success in these fields and
extrapolated that all software would soon look like that. "GUIs are going to
be the future. Also, all of our software works with real-world stuff. Money
exists in the real world, right? Therefore it's an object."

Java certainly didn't start the OO hype. By the time Java hit, the hype had
already been in full force for at least 5 years... Borland's CEO announced
sometime around 1992 that they would defeat Microsoft by adopting a "full OO"
development workflow. At the same time, Microsoft was hyping its own OO
concepts and frameworks like MFC.

But Java definitely hit the sweet spot of multiple hype cycles at once with
its combination of OO everywhere, web-oriented applet deployment and platform-
independent VM.

------
tboyd47
> An object is both code and data, but also neither. This means people can
> talk about objects and yet not have to talk about either data or code.

These two sentences jumped out at me. I really like this analysis, because it
would explain why object-oriented thinking especially seems to be prevalent in
hierarchical organizations. It allows any number of minds to contribute to the
design of a system without any knowledge of the technical details of that
system.

~~~
waps
Object orientation works because it worked for Windows (and other UI
toolkits, even today) when computers were switching to graphical user
interface toolkits. And frankly, object-oriented UI toolkits work a lot
better than webdev, even today.

But OO doesn't work on the web, because browsers have detonated a hydrogen
bomb of "historic reasons for this API" over the worst UI toolkit ever, and
abstractions, whether OO or otherwise, simply don't work well over this. They
make it extremely hard to find where the OO toolkit is incompatible with
vendor X's browser version 82.2.1.3.3.883.1.3.4, and harder yet to fix the
problem.

It seems like 2014 was the year of having programmers rant against every form
of abstraction because they can be misused and because they "hide complexity".
Well of course they hide complexity. That's the point. That's what allows us
to raise the abstraction level of what we're doing and working on.

A simple shared networked database (also known as CRUD, or to some extent
MVP) was a solved problem on NeXTSTEP before I left kindergarten, but in 2014
we all insist on rewriting the whole thing from scratch (we can't even use an
actual database anymore: "it doesn't scale").

When I was a 12-year-old discovering dBase, writing a networked CRUD
application was something you'd do in roughly an hour. Today, a week is not
that bad, apparently. When I turned 14 and worked with dBase for Windows,
adding rich text fields (with working copy-paste from Office documents) was
less work and finished faster, by a factor of 100. First I wrote these apps
in things like dBase, then Delphi. Then I started writing them in PHP + SQL.
That was tedious, but at least it was vaguely manageable. Today, I write them
in Bash (for deployment) + Go + SQL + Python + JavaScript + JSON, not
counting the various configuration languages I need to know. They don't work
nearly as well as the things I made in dBase for Windows; without a doubt
they are less functional than the tools I wrote in Delphi; they can do vastly
less on computers thousands of times faster (my first Delphi was installed on
66 MHz, 16 MB RAM and a 500 MB drive); without weeks of work they don't look
nearly as good; and they are utterly and completely incompatible with
everything else on the planet.

This is why we had OO (the very first book I had on the subject started with
how to make OO work for an extensive text editor). That's what's good about
it. As for why we're not doing it anymore ... I don't really know.

Today we are locked into the web. End users have no control whatsoever over
their files, over what programs their computers run (my "standard" text
editor is in JavaScript and gets its code downloaded from an internet
provider that could change it at any time; I, of course, cannot change it),
or over who can read their files and mail. Governments, the MPAA, the RIAA,
and so on get people's mail clients blocked and their mail made inaccessible
on a regular basis, ...

[http://en.wikipedia.org/wiki/Worse_is_better](http://en.wikipedia.org/wiki/Worse_is_better)

~~~
tboyd47
Yes, I am also a web dev, and this does describe the state of the industry as
I see it.

Just wondering, but why Go? I ask because Go doesn't have objects, so why not
use a regular OO language like C#?

~~~
szabba
Go has objects and methods. What it lacks is inheritance.

~~~
waps
Go has inheritance, including confusing method resolution orders and
everything:

[http://golang.org/ref/spec#Struct_types](http://golang.org/ref/spec#Struct_types)

("AnonymousField" is the name they give to it. But it's inheritance. Badly
executed, and it doesn't fit well in the type system, but that's par for the
course for Go.)

I don't get where these statements keep coming from. Well, I do, to an
extent: the Go authors keep claiming these sorts of things, but I don't
understand why. They're not true. There is no complex language feature that
Go does not have. Go has inheritance. Go has generics (simply inaccessible to
everyone except the language authors, like in Modula-2). Go has polymorphism
(simply inaccessible to "mere" users). Go has return-value polymorphism. Go
has type parameters, both in types and functions (again, inaccessible to
users of the language, but it's there).

Everything is simply built as custom compiler code that's inconsistent and
riddled with unexpected edge cases, none of it worked into the type system
itself. This is the real reason for the resistance against these language
features: working them into the type system would be a necessity, and that
isn't possible without breakage, because they're not consistent.

------
programminggeek
I think OO is successful because people, by our very nature, love to name and
categorize things. The way most people see OO, it is ALL about
naming/categorizing and inheritance. Things belong to other things, and are
related to other things, and so on and so forth.

Academia especially loves naming things and creating structure and hierarchy.
OO makes programming sound scientific, like physics or biology or something.

The business world loves the idea that you could have these reusable objects
that would let you just fit a program together like legos. Given the huge
technology investments, it sounded like a great idea, but it hasn't really
worked out that way for the most part.

It sounds silly to think that OO really took off because it feeds the
psychological need to name things, but I think that's a big part of it. Even
the Bible says that Adam's first and primary job was to name all the animals
of the earth. Even if you aren't a believer, the fact that a book written
thousands of years ago and handed down from generation to generation explains
that the first man's job was to name things speaks a great deal about how
humans are obsessed with names and naming, and have been for a very long
time.

In that light, it wouldn't be at all surprising that OO would be successful.
If anything, approaches that take away the ability to name and organize
things in a way that people enjoy will attract fewer people, unless there is
some other psychological benefit that is larger than the benefit of naming
and organizing things.

------
andrewstuart2
I agree that object orientation is a psychological success, but I think it's
more for linguistic reasons than anything else. Our speech and thinking are
organized into things, attributes, and actions.

"The stove heats the water." An object performing an action on another object
(a direct object, in parts of speech), changing the state of the direct
object.

So when we code this, it makes sense naturally to write something that is
readable in human terms:

    stove.heat(water);

This makes code easier to reason about and easier to go back to later.
Hopefully, our code reads like the computer- and human-readable literature it
is.

~~~
themartorana
I think this is why Elixir's pipe syntax spoke to me while hacking on it
yesterday. It puts the subject first, which has that human-readable bit to it.
I've been writing code in myriad languages for 15+ years, but that doesn't
make me impervious to what seems "obvious" or intuitively "proper" to my
brain, organization-wise.

Organizing language in this fashion is English-specific, though. I wonder
what feels intuitive to non-English (or non-Romance) speakers - is it OO as
well, or something else?

~~~
seanmcdirmid
All natural human languages are noun-based, having subjects and objects (in
the part-of-speech sense). Verbs never stand alone.

~~~
sbov
Some languages put verbs first though. In which case it seems like functional
programming languages might make the most sense to them.

~~~
_asummers
Which spoken languages do this, out of curiosity?

Edit:
[https://en.wikipedia.org/wiki/Verb%E2%80%93subject%E2%80%93o...](https://en.wikipedia.org/wiki/Verb%E2%80%93subject%E2%80%93object)
Cool.

~~~
hrjet
Example of verb first: RTFA :)

~~~
randallsquared
Implied subject first: [you] read the fine article.

------
hyp0
OO = ADT + inheritance.

And inheritance is problematic.

[ADT is Abstract Data Type; i.e. values that support operations, and are not
tied to a specific concrete implementation]

------
marcosdumay
That's not a case for OOP, it's a case for interfaces and encapsulation. Both
were already widespread when OOP was created.

~~~
seanmcdirmid
In 1963 (Sketchpad)? Or 1967 (Simula)? So when OOP was created in 1967, I'm
not sure modules and interfaces were in widespread use, considering that they
didn't really even appear until the '70s (definitely not in Algol 68).

------
KillerRAK
Building systems, specifically software systems, is all about creating
analogies. OO lends itself well to this reality.

------
beagle3
OO is not well defined - what's OO to one person is not OO to another, and
everyone seems to like _their_ version of OO but not others.

Obligatory ref:
[http://www.paulgraham.com/reesoo.html](http://www.paulgraham.com/reesoo.html)

~~~
jghn
Exactly.

Recently I was showing my coworkers how the type class pattern works in Scala,
and was arguing that this was _very_ OO, just not OO the way someone from the
Java/C++ branch of the family tree would think of it - but rather more along
the lines of what you see in CLOS, Dylan, R's S4, etc. They weren't buying
what I was selling, but I'm still selling it.

------
wisienkas
I started out learning programming concepts with Java (an OO language).

(I still love Java; I just don't feel like OO languages are the only way to
get things done.)

Java (2.5 years of experience) is fairly easy and has great IDE support, etc.
And the template thing in OO, where a house has a floor and a roof and 4
walls and so on, is fine.

But can't you do the same in C (which I have recently started to learn quite
intensively)? I know you have to write a little more code, but it isn't that
much really, and C gives you great control over everything. You can also
organize code into files in C just as well as you would have done in Java.

\- If I'm completely off, please tell me

~~~
rbrogan
That is correct, in the abstract. There can be programs written in both C and
Java that do the same thing. The question is then: if people are writing
programs in Java, what is the cause, what is making that happen?

~~~
jghn
Presumably the cause is that their boss told them to use Java.

Now why did their boss tell them that? Often these decisions have all sorts
of legacy issues involved which don't make the real reasons all that clear
cut - and if one were to really dig back, the initial answer is likely
"because that's what people say we're supposed to use".

------
mamcx
Something forgotten: Visual Basic. Plus some smaller but still known players
like Visual FoxPro and Delphi.

And the MS evangelism of OO and the "flavor-of-the-year architecture that
also happens to be OO" that was central to their conferences.

So: the most popular language, and the others, and Java, all OO.

Outside the university, that was the landscape. And inside the university?
All the tech there left a bad taste, the impression of being bad, ugly, and
slow. Delphi, FoxPro, and Visual Basic, instead, DELIVERED. And they were OO,
so OO was the thing.

------
protomyth
I think it has a much simpler explanation. Most experience programming GUIs
was in object-oriented languages, with the original Macintosh showing the
problems of not using one. OO fit because it grew up with the concepts. The
1990s were the era of the new UI (e.g. Windows). If Microsoft had said that
functional programming was the way to go, then OO would not have taken off.
Instead, C++ and Java, with Sun's amazing marketing, won.

------
jordanpg
To the extent that anything is "wrong" with OOP, it is encapsulated (!) well
by the Reactive Manifesto [1].

The global computing paradigm is outgrowing the domains where OOP is useful.
And I'm pretty sure that if you look closely enough, there will be OOP under
the hood of a great many systems for a long time to come.

[1] [http://www.reactivemanifesto.org/](http://www.reactivemanifesto.org/)

------
zzzcpan
Managing complexity, large projects, large teams and all that were just fine
with C. And neither Java nor OOP were able to improve on that. If anything,
they only made it worse. So, I don't think popularity of OOP is a success.
Certainly not for software engineering.

------
y1426i
All the details of OO aside, it was the first thing that gave a way of
managing the complexity of large projects. It allowed a large number of
people to come together to write one piece of software, and provided a way of
putting it all together, testing, and debugging.

------
hasenj
I think the key to the success of Java's flavor of OO is providing an
efficient implementation of dynamic function dispatch. This is the core
feature, without it, OO would be useless.

The same feature can easily be achieved in functional languages (including
JavaScript), but its performance doesn't scale well at all.

This is the case even in JavaScript: attaching dynamic functions to object
instances performs significantly worse than relying on a prototype chain.

When a function is defined in the prototype chain, it's defined once and
shared across all instances.

In Java, when you have a reference to an abstract class, and you call
`object.method(args..)`, you don't know which function is being called. It's
somewhat like if you have a function pointer/reference and called
`method(object, args...)`.

Java lets you do this without losing all the benefits of static type checking.

------
Yokohiii
I have been arguing against OOP with my peers for years. The last resort
argument is very similar: "OOP is a form of communication between humans and a
mental concept".

Maybe we should delete OOP and just talk UML?

------
bane
OO is successful because at the time (when software engineering was really
growing into a proper engineering discipline), large engineering efforts
needed something, _anything_, that would improve the management and
development of large-scale software efforts, with millions of lines of code
and hundreds or thousands of developers working in different time zones all
contributing to the solution. OO has always been a solution of _management
science_ for software engineering, not Computer Science.

One of the things that's very hard to understand is how much large efforts
rely on our ability and understanding to manage and coordinate the effort of
all these people providing work input -- yet management science is still
remarkably poorly understood.

OO is nice because objects define work-units which are easy to measure and
track through a management structure, and whose development kind of maps to a
work breakdown structure, in much the same way assemblies and sub-assemblies
go into a bridge or a car in traditional engineering:

Phase 1:

Part A: develop all the classes and methods on each class.

Part B: test and certify components.

Phase 2:

Part A: integrate 1A components.

Part B: test and certify integration.

Phase 3: repeat phases 1 and 2 until you have finished software

"oh team X is working on component (class) A2? when does A2 move to testing?
okay, I'll schedule that. You know component B6? Yeah, it failed testing and
needs to be reworked, but it shouldn't hold up phase 2: integration."

Lots of people give Java and Java shops lots of flak because the work is
unglamorous, but they've semi-succeeded in mapping software engineering to the
management science practices that were formulated during the Industrial
Revolution...meaning they've turned software development into assembly line
work.

This is why good software developers hate working in these places. It would be
like a fine watch craftsman going to work on the Timex digital watch assembly
line at Foxconn.

The Management Practices that informed OO and drive it are _the_ dominant way
to "make stuff on schedule" that humans have managed to come up with. Like it
or not, it's why cars get made, t-shirts end up in stores for $10, and your
state DMV has an on-line renewal site.

It's interesting to study large software projects that failed as well. SAIC
has a couple of well known ones: Virtual Case File [1] and CityTime [2],
Healthcare.gov [3] was another good example. It's even more enlightening to
compare these failures to big traditional engineering failures: Seattle Tunnel
[4], Space Shuttle failures, etc.

One thing to keep in mind: large software projects are often more complex in
raw unit count than even the most complex engineering efforts in history. For
example, the Space Shuttle is considered one of the most complex pieces of
engineering ever, and it only had 2.5 million parts. [5] OS X 10.4 by
comparison has almost 100 million lines of code. [6] It's not a perfect
comparison of course, but it does help cement the magnitude of modern
software efforts.

If OO has a failing, it's that humans haven't managed to come up with a
better way to break complex work down, or to define software methodologies
that map well to how we know how to manage and coordinate complex tasks. OO
works _very_ well when considered in this context; it does more or less what
it's supposed to. It's not intended to provide a methodology for elegant or
creative software crafted by a single person or small team of experts.

We've learned a lot in the decades since OO became the dominant force in
software organization, and Management Science has improved a couple of tick
marks as well. The next paradigm shift is likely close. But make no mistake:
it'll probably be a streamlined OO-like system of trackable, schedulable work
components. It won't happen quickly, though; literally _trillions_ of dollars
in work effort have been put into OO, from the software to the management
practices.

1 - [https://en.wikipedia.org/wiki/Virtual_Case_File](https://en.wikipedia.org/wiki/Virtual_Case_File)

2 - [http://washingtontechnology.com/articles/2011/10/25/saic-sac...](http://washingtontechnology.com/articles/2011/10/25/saic-sacks-3-citytime.aspx)

3 - [http://www.cio.com/article/2380827/developer/6-software-deve...](http://www.cio.com/article/2380827/developer/6-software-development-lessons-from-healthcare-gov-s-failed-launch.html)

4 - [http://www.popularmechanics.com/technology/engineering/infra...](http://www.popularmechanics.com/technology/engineering/infrastructure/bertha-seattle-tunnel-boring-machine-still-stuck-17517311)

5 - [http://spaceflight.nasa.gov/shuttle/upgrades/upgrades5.html](http://spaceflight.nasa.gov/shuttle/upgrades/upgrades5.html)

6 - [http://www.informationisbeautiful.net/visualizations/million...](http://www.informationisbeautiful.net/visualizations/million-lines-of-code/)

~~~
tboyd47
Why couldn't we have achieved this assembly-line production system with
procedural programming? Just separate the project into files, assign each
programmer to a file, say, "Write a function called X, that takes input A, and
returns output B," the manager writes the main() function that pulls them
together, and there's your assembly line. It may be a dumb question, but I'm
honestly asking.

~~~
bane
In a sense you do. Most OO languages like Java _are_ procedural languages as
well. But OO offers the "advantage" of bundling everything in a given
work-unit together, while something like C may leak state or names all over
the place, and you end up with similar or identical functions being
improperly used, all of which needs to be tracked or disambiguated somehow.

Lots of people consider namespacing and packaging up your source well as a
way to do this. I think that works up to a point, but once you start hitting
_really_ big projects, the way Java enforces and cleans up some of this
optional stuff, making these things part of the language, makes it a _little_
more natural. Instead of having to be careful about your work (meaning if you
aren't, something will go wrong), you simply _have_ to follow the best
practice to make progress. On smaller projects, where the manager _can_
review every commit, every line of code, every way that work was done, and
can enforce a cohesive and singular idea about how it should be done, it can
work. But once the scope expands beyond what one management drone can handle,
it starts to become something that falls into the next layer of Management
Science abstraction.

tl;dr Making things that are optional in some languages into requirements (or
the code won't work) seems to have been a good optimization.

------
rakshak
Machine learning is the next OO. Where you write code that can learn and adapt
to changing conditions. I think code will be more of a poem than a recipe.

------
philippeback
There is more than meets the eye in OO.

First, most languages aren't doing OO the way it was described by Alan Kay,
who coined the term.

What we have are data structures with procedures that invoke other procedures
in data structures.

That's a long way from independent units sending messages to each other. REST
on the web these days may be closer to that initial intent.

Not that it means anything special, but I've been busy with this "Object"
thing since 1990, trying to figure out what it meant exactly.

C++ (Symantec C++, MS Visual C++ with COM/ATL), Delphi, Tcl with XOTcl, Java
with the dreaded EJBs shipwreck, Smalltalk, R, C (including Tuxedo services),
assembly, JavaScript, and other languages like Clipper/xBase have all been
used to deliver client solutions.

A couple of them are not OO at all. Still, they worked too.

For the record, I've also given OO-related training to more than 2000 people
over the years (language side, analysis side, system design side) and seen
people "getting it" or not. I've been involved in UML (and OMT/Booch/...
before that).

At this point in time, a lot of that boils down to being able to partition a
system properly based on responsibilities and the team that is available to
build the system.

Fred Brooks was right: there is no silver bullet.

"OO" designs were quite apt to adjust to changes that were quite probable. But
some changes just made them break.

For some systems, the OO approach was really good for remembering what was
done. I remember being called to fix an issue in a system that had been
running in production for something like 13 years (it was C++). It was easy
to figure out how things were laid out, as the pieces were still
identifiable.

One lesson is that objects shouldn't be too fine grained. Too many objects and
you get the "ravioli" effect.

Good architecture and good coverage of key concerns is what matters. Once this
is done, technology can be bent to do one's will.

I do have a personal affinity to objects but with functions and closures, one
can get other ways of thinking.

But there is something that has been all wrong for years, namely the fact
that a lot of OO languages required hierarchies for polymorphism to work.
That's just wrong, and that's why I like Smalltalk, for example, with
messaging at the core. That's what made Tuxedo apps successful for
transaction processing. Messaging, async, the ability to decouple. That is
more key than structures. And that's been lost for a long time.

Javascript gives more in that regard, despite it being quite a kludge of a
language. But it is flexible and has closures, more than a lot of "OO"
languages.

I don't think OO is what made Java successful, as OO designs in Java are
quite poor and overcomplicated. Spring gave some sanity back, but the core
thing is that Java has libraries for a lot of things. Not that it is OO.

My take is that OO is going to be less of the dominant way, and we are
entering the age of the polyglot programmer.

