
OO languages spend most effort addressing a minority use case - film42
http://250bpm.com/blog:59
======
jokoon
The issue is that there are so many ways to do one thing with OOP. Programmers
tend to add complexity when they can do OOP, so it's not the fault of OOP, but
it's hard to teach good practices. In a way, the problem is that OOP is too
permissive. Readability is a nice property of a language, but it comes from
the language encouraging certain practices.

In that regard, C++ is fine. Java is not.

Also a bothersome issue is people arguing that OOP is reusable, so that they
invent all sorts of things that will be extendable, and then you see their
code never being used ever again.

My opinion: don't teach OOP, teach software design. PLEASE.

~~~
Hermel
> In that regard, C++ is fine. Java is not.

You realize C++ is much more permissive than Java?

~~~
jokoon
Java encourages OOP. C++ only allows it.

------
skrebbel
What a straw man. Saying that OO is all about inheritance is like saying that
Tesla cars are all about having a big touchscreen dashboard.

~~~
twic
The best definition of object orientation I've come across recently is this
one:

[http://wcook.blogspot.co.uk/2012/07/proposal-for-simplified-modern.html](http://wcook.blogspot.co.uk/2012/07/proposal-for-simplified-modern.html)

It identifies two defining attributes of objects as a particular kind of
value: they are collections of behaviours, and invocations of those behaviours
are dynamically dispatched. Inheritance isn't required; it's polymorphism
that's important, and inheritance is just one way to achieve polymorphism.

~~~
srean
In what way would that be different from the use of simple closures? Given
that closures predated OOP by a couple of decades, I am interested in what new
or different idea (or even practice) OOP brings to the table. My question is
sincere and genuine.

~~~
twic
Well, Guy Steele suggested [0] that

> A closure is an object that supports exactly one method: "apply".

In which understanding, what objects add is the ability to have multiple
methods associated with one bit of closed-over state.

Now, you can do that without objects by creating several closures in one
context and putting them all in a record to give them names. But at that
point, you've just implemented objects. Hence the famous aphorism quoted in
Steele's email: _objects are a poor man 's closures; closures are a poor man's
objects_.

[0] [http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03269.html](http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03269.html)
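
The "several closures in a record" construction can be sketched in a few lines of Python. This is only an illustration of the equivalence being discussed, with made-up names; the shared `count` variable plays the role of private object state, and the dict of named closures plays the role of the object:

```python
# Sketch: an "object" built from closures sharing one captured variable.

def make_counter(start=0):
    count = start  # shared, private state; invisible outside the closures

    def increment():
        nonlocal count
        count += 1

    def value():
        return count

    # A record of named closures -- effectively an object with two methods.
    return {"increment": increment, "value": value}

counter = make_counter()
counter["increment"]()
counter["increment"]()
print(counter["value"]())  # prints 2
```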

~~~
srean
I prefer the second idiom: create a tuple of different functions closed over a
common state. You get privacy and data abstraction for free. This is nothing
new or groundbreaking, in theory or in practice; the idiom will be familiar to
anyone who has taken a 101 course. So I have yet to find what it is that OOP
brings to the table. Some OOP fans seem to have a large chip on their
shoulder, as if they saved the world. I want to understand what that's all
about.

~~~
twic
When you create a tuple of different functions closed over a common state,
that _is_ object oriented programming. If that's something you find useful,
then you _already understand_ what object oriented programming brings to the
table.

~~~
srean
But that's my point: this was common practice decades before OOP fanboyism
came to town telling everyone it had a new solution for all the software
problems of the world. And what was being sold as this _new_ paradigm was not
only old hat but a special case among the many ways closures can be useful.
One uses it when it matches the problem domain. The overblown fuss around it,
raising it to the grand position of _the_ solution to everything, along with
the assertion that you are doing it wrong if your code is not OOP, keeps me
permanently mystified.

~~~
seanmcdirmid
That never happened though, not in the 80s/90s when OO was considered new
(everyone had experience with objects before they were called objects). It is
something people imagine a lot, probably so they can feel more smug hating on
OOP. But really, any new technology will attract clueless fanboys who think
they are working with something brand new (again, to support their smugness in
pretending to be a pioneer). And let's not get started with the FP fanboys,
who think they are "doing functional programming" when it's obvious from
looking at their actual code that they are thinking in terms of objects and
don't really get FP at all.

~~~
srean
Agreed in full about FP fanboyism, and exactly as you said, some write Java
code in their shiny new language.

------
nstart
When I started using Python I wondered why classes were not so prevalent. I
was just coming out of several years of C# (I had scaled the MVVM WPF
mountain; interfaces everywhere). Since then I've realized that I rarely use
classes except to store a basic model. And unless there's logic around
converting a value from one thing to another (full name by combining first
name and last name, as a simple example), I've started moving things like
validation and non-basic business logic into their own modules. This makes
using it across systems easier and lets me manage the complexity better.
Almost no OOP used, basically. This has been pretty liberating as far as
development and testing go.
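
A minimal sketch of the split being described, with entirely hypothetical names: a thin model class keeps only simple derived values, while validation lives as plain functions in their own module:

```python
# Hypothetical sketch: thin model + module-level validation functions.

from dataclasses import dataclass

@dataclass
class User:
    first_name: str
    last_name: str

    @property
    def full_name(self) -> str:
        # Simple value-conversion logic stays on the model.
        return f"{self.first_name} {self.last_name}"

# These would live in a separate module (say, a hypothetical validation.py).
def validate_user(user: User) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not user.first_name:
        errors.append("first name is required")
    if not user.last_name:
        errors.append("last name is required")
    return errors
```

Because `validate_user` takes plain data and returns plain data, it can be tested and reused without constructing any object graph.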

(Of course in 2 years I'll understand the cons of this approach and will think
back on these times and wonder how I could have been so naive. :D )

------
rikkus
Just because a language supports OO doesn't mean that you are somehow trapped
using only the OO paradigm. OO is a feature that gives you, amongst other
things, the possibility of expressing an inheritance relationship.

Most code I write in C# is OO in that it leverages encapsulation, but is
generally written in a functional style (thanks LINQ) and uses inheritance for
maybe two out of a hundred classes.

Most popular languages now are multiparadigm, so there's little point in
criticising particular paradigms as if we're somehow stuck working with them
and them only.

~~~
calibraxis
You’re trapped when coworkers use (and demand) OOP, not to mention libraries
which require it.

Like when your coworkers become religious about the ORM ("How DARE you write
SQL!") when the database people actually have the superior model (relational)
to the programmers (OOP).

Also, many languages have an explicit critique of OOP. Take Rich Hickey's
discussion of how OOP complects a bunch of features which can be available a
la carte.

~~~
rumcajz
Link?

------
graycat
Sure. I've recently finished writing 18,000 programming language statements in
Visual Basic .NET across 80,000 lines of typing, including comments and blank
lines, and so far I have never once used inheritance, single or multiple. I
don't even know how multiple inheritance would handle naming conflicts. Maybe
Microsoft's many classes in .NET make heavy use of inheritance, but I don't.

So, inheritance is okay, but I don't use it directly. It just isn't a tool I
use; it stays in the bottom of my toolbox.

I use classes much like structs in C or structures in PL/I. Yes, classes are a
little more general, and I do make use of some of that generality.

But, PL/I structures are quite a bit more general than structs in C and,
really, nearly as useful as classes. And, for "addressing" as in the OP, the
array addressing in PL/I structures is much more efficient than addressing in
classes.

The OP is correct: I don't think of classes and inheritance as _representing_
an is-a relationship. Sorry, I just don't do that. Don't need it. I'm big on
collection classes (hopefully based on AVL or red-black trees) and did write
my own key-value store for session-state storage for my Web site, and that
usage can be regarded as using is-a, but, again, I don't get is-a from class
inheritance -- I just don't need that.

And the OP is correct: A lot that is in my code has its meaning clear only in
the comments. The code is like the displayed equations in a calculus or
physics book, and the text between the displayed equations is like the
comments. Both the text in those books and the comments in the code are
crucial.

Yes, the _meaning_ of the code is crucial, but for _meaning_ I want to use a
natural language, e.g., English, in the comments.

~~~
rumcajz
But why not use assembly with a lot of comments then?

Higher level languages aspire to take at least some of the "meaning" stuff
that would otherwise be in comments and represent it using in-language
constructs.

~~~
graycat
Good observation.

Likely the answer is, again, similar to what is done in a calculus or physics
book: E.g., in a calculus book, there's a lot behind the Riemann integral, and
in physics, behind the electric field, so in those books we don't express
everything as just simple arithmetic with comments.

And, yes, in my code, when I write a function or subroutine, I give it a
mnemonic name and have it do some work that has _meaning_ that is described in
the comments.

But here, for me, is a telling point: in grad school, my best math prof stated
flatly that math is written in complete sentences. The mathematical symbols do
not replace the natural language -- e.g., English -- in the sentences,
paragraphs, sections, chapters, etc. Good mathematical notation usually
carries some mnemonic hints of the meaning, but, still, the meaning is in the
text, not the symbols. So, net, there is limited utility and little future in
trying to have programming language syntax replace the English language for
communicating meaning.

------
_pmf_
I've always wondered why (implementation) inheritance is a first class
construct while delegation has to be painstakingly implemented manually.
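
The contrast can be sketched in Python (illustrative names only). Manual delegation means writing a forwarding method per delegated call; Python's `__getattr__` hook cuts the boilerplate, but neither is a first-class delegation construct:

```python
# Sketch: manual delegation vs. semi-automatic delegation via __getattr__.

class Engine:
    def start(self):
        return "engine started"

class ManualCar:
    def __init__(self):
        self._engine = Engine()

    def start(self):
        # Every delegated method has to be written out by hand.
        return self._engine.start()

class Car:
    def __init__(self):
        self._engine = Engine()

    def __getattr__(self, name):
        # Called only when normal lookup fails: forward to the engine.
        return getattr(self._engine, name)

print(Car().start())  # prints "engine started"
```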

~~~
jwdunne
I guess you could do this in a higher-level language like Ruby, though you
either have to use a mixin or monkey-patch a core class, which isn't as
clean-cut as a first-class construct. With Lisp, you could just create the
construct if it isn't there already.

No such luck in Java et al.

------
tonyedgecombe
> OO folks are wasting their time discussing minutiae of their little peculiar
use case, such as, say, single vs. multiple inheritance, while at the same
time nobody is paying attention to what is needed in majority of cases.

Maybe they were fifteen years ago; I don't think that is the case now. The two
most popular OO languages (Java and C#) don't even support multiple
inheritance.

~~~
_ZeD_
Well, with default implementations in interfaces, Java now supports (something
resembling) multiple inheritance.

~~~
Forlien
I think you and tonyedgecombe have different definitions of support. In my
understanding, default implementation was included to allow adding methods to
interfaces without updating the classes that implement them. You can use this
feature to do something like multiple inheritance, but I haven't heard anyone
recommend it.

~~~
_ZeD_
Well, I hardly know anyone who recommends multiple inheritance, whatever the
language is :D

------
sklogic
It held me until the C++ rant. Among the multiple ways of using C++, the OOP
way is the least important. It is not an OO language, so most of the
legitimate anti-OOP criticism simply does not apply to C++.

~~~
javert
> It is not an OO language

That seems really outlandish and obviously flat-out wrong. I assume, then,
that you have good reasons to have this contrarian position, and I am really
curious to hear what they are :).

When I think of C++ I think of "C with classes." In other words... C with
objects. And... a bunch of other stuff tacked on over the years. Which is not
to discount that stuff---maybe that is what your argument rests on; if so,
that's fine.

~~~
bluejekyll
C++ can be OO, but isn't always OO. If OO is defined as having polymorphic
runtime function invocation, then C++ lets you opt in to that behavior with
'virtual' and heap allocation, whereas Java requires you to opt out with
'final', and all Java objects are heap allocated.

My biggest point in support of OO is that it encourages DRY (don't repeat
yourself) more effectively than other paradigms, IMO. But I am finding that
Rust's balance of functional and OO styles lets you pick the best way to stay
DRY.

Anyone who doesn't practice DRY programming doesn't understand the cost of
maintaining production code and technical debt.

------
agumonkey
His articles about C vs C++ are also interesting

[http://250bpm.com/blog:4](http://250bpm.com/blog:4)

[http://250bpm.com/blog:8](http://250bpm.com/blog:8)

~~~
albinofrenchy
He seems to be throwing the baby out with the bathwater here a bit. Exceptions
are a failure and a bit of a minefield in C++; I have no argument there, but
you also don't need to use them.

The list example is a bit contrived, too. A simple templated class for the
list node could give both encapsulation and the exact same speed and
allocation characteristics as the C code.

~~~
agumonkey
Yeah, I should have mentioned to read the comments, lots of good points raised
there to balance things.

------
pdkl95
This is one side of the Expression Problem.

[http://c2.com/cgi/wiki?ExpressionProblem](http://c2.com/cgi/wiki?ExpressionProblem)

------
titzer
Most OO languages are incorporating features from functional languages and
generic programming, so the author's rant is kind of misdirected. Maybe he's
talking about Smalltalk or Java 1.0?

Besides, in OO languages, people are coming around to the idea of favoring
composition over inheritance anyway.
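
"Composition over inheritance" can be sketched in a few lines of Python (the names here are made up for illustration): instead of subclassing a logger to add timestamps, you wrap one:

```python
# Sketch: extend behavior by wrapping (has-a) rather than subclassing (is-a).

import time

class Logger:
    def log(self, msg):
        return msg

class TimestampedLogger:
    """Has-a Logger rather than is-a Logger."""

    def __init__(self, inner):
        self.inner = inner

    def log(self, msg):
        # Decorate the message, then delegate to the wrapped logger.
        return self.inner.log(f"{time.strftime('%H:%M:%S')} {msg}")
```

The wrapper works with any object exposing `log`, not just `Logger` subclasses, which is exactly the flexibility inheritance hierarchies tend to lose.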

So this person is basically arguing against a strawman.

------
singingfish
Multiple inheritance is often misguided. Back in the day it was the major
selling point of OO. These days decent languages have composition via roles or
traits.

~~~
TeMPOraL
So basically the "progress" of OOP wrt. multiple inheritance looked like this:
first we throw it away because it's too dangerous, then we realize it's
actually needed for good architecture, so we slowly try to smuggle it in under
different names ("traits", "roles", "mixins") hoping nobody will notice.
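
For what it's worth, a Python mixin really is multiple inheritance in all but name: a minimal sketch (illustrative names), with a class that contributes behaviour but no state of its own:

```python
# Sketch: a mixin is restricted multiple inheritance -- behaviour only.

import json

class JsonMixin:
    def to_json(self):
        # Serialize whatever instance attributes the mixing class defines.
        return json.dumps(self.__dict__)

class Point(JsonMixin):  # "mixing in" = inheriting from a second base
    def __init__(self, x, y):
        self.x = x
        self.y = y

print(Point(1, 2).to_json())  # prints {"x": 1, "y": 2}
```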

~~~
agumonkey
Too much sarcasm. MI was too general, to the point of doing more harm than
good, so it had to be toned down to find the sweet spot.

~~~
TeMPOraL
I'm being sarcastic not because I don't understand the rationale, but because
of the dogmatic approach of the loudest proponents of those changes.

------
paxcoder
I can't make myself read something which has such an evidently wrong title.
Must we ignore reality to push our alternative paradigm nowadays?

~~~
bantunes
You should try to be open to it; the OP tries to make a point, not just rant.
Typical OO structures don't fit the way some programmers (a small majority?)
think. Is becoming proficient at OOP an exercise in brain rewiring? Or should
we reject it, since it doesn't come naturally? Worthy of discussion and not
just outright dismissal, IMO.

~~~
yxhuvud
The problem is that he really doesn't have a point -- only a straw man. He
doesn't like inheritance, but he likes message passing. Well, guess what:
Alan Kay, the founder of the OO paradigm, also considers message passing to be
the most important part.

This is at most a rant against the misguided implementation in certain
mainstream languages. Not against OO in general.

~~~
rumcajz
True. But in reality, when you hear the term "OO", people mean Java and C++,
not Alan Kay's original concept. Sad but true.

~~~
TheOtherHobbes
When you look at books that introduce OO, and languages that incorporate it,
it's usually presented as is-a and extends and inheritance.

The fact that Alan Kay meant something different is irrelevant.

While you can build messaging and composition into most languages, they're
usually considered application models in their own right, not core features.
(Objective-C is an exception, among a handful of others.)

So I think it's a valid question - why do mainstream languages still present
this model _as a core feature_ when other models are less rigid, more
expressive, easier to work with, and better candidates for language
fundamentals?

------
tlarkworthy
Eh? Holding a reference is the arrow

~~~
nstart
The point here (pun intended) is that a reference is not an OOP-specified
feature. Polymorphism is.

