
Ask YC: What exactly is so hard about OO? - bporterfield
I've been feeling a little confused lately, given the litany of blog posts heralding the return of procedural programming or detailing how the layman programmer incompetently uses object-oriented languages.

To me, the concept seems somewhat straightforward, so I have to ask: what exactly is the difficulty people have with grasping OO programming and using it to construct cleaner, more maintainable code? What are the common mistakes? Am I unknowingly using OO wrong?! Are we ALL??

I think that like any programming technique OO has its place, and it is certainly not useful in many circumstances. I also don't think it's an incredibly complex concept for a programmer with a little experience, and it can be quite useful when implemented properly. Please correct me if I'm wrong.
======
lisper
The confusion about OO is due entirely to people's failure to apply Ron's
First Law: All extreme positions are wrong. What happens is that someone
stumbles upon a new technique that seems to be good for something, so they
decide that it must be good for everything, and when they realize that it's
not good for everything (typically ten years into the process) they decide
that it was all hype and it must be good for nothing. So everyone stampedes
off to find the Next Big Thing, whatever that is, and the whole sorry cycle
starts all over again.

It's particularly amusing as a Lisp programmer to watch all these fads come
and go because none of these ideas are new. The OO features of Java and C++
are just tiny subsets of the functionality in CLOS. XML and JSON are nothing
more than (bad) re-inventions of S-expressions. Aspect-oriented programming is
just a hack to get around the fact that Java doesn't have macros or
multimethods. And on and on it goes.

The right answer, of course, is that OO (and every other programming
technique) is good for some things and not other things. Figuring out what
things a particular technique is and is not good for is a big part of
mastering the art of programming.

~~~
nostrademons
+1 to your main point, but I had to respond to this side point, since I see it
mentioned so often by Lispers:

"It's particularly amusing as a Lisp programmer to watch all these fads come
and go because none of these ideas are new."

The takeaway from this is not that programmers are stupid and we should all be
using Lisp, but that user interfaces matter. Social conventions matter.
Installed bases matter. After all, most of Web 2.0 is just fancy rounded-
corner reimplementations of 30-year-old UNIX utilities.

C succeeded because in one critical area - creating a fast, responsive UI on
the hardware & compiler technology of the early 1980s - it worked and Lisp
didn't. C++ and Java succeeded because programmers could take their C
knowledge and syntax and apply it.

XML succeeded because it could leverage everyone's knowledge of HTML. HTML
succeeded because at its heart, it's just text - you could start typing a
plain old text document and it was valid HTML. A good part of JSON's success
is that every JSON document is also legal JavaScript and legal Python.

Aspect-oriented programming generates interest because it's in Java, and
people already use Java. And on and on it goes...

On purely technical grounds, the Lisp solution from 25 years ago is almost
always better than the modern-day solution we're just rediscovering now. But
the modern-day solution has the benefit of _building on top of all the social
conventions that have arisen in the last 25 years_. That's worth remembering,
particularly in a forum devoted to entrepreneurship. If you tell your
customers "our solution is better, but you'll have to throw away everything
you already know and start from scratch", they probably won't remain your
customers for long.

~~~
lisper
Absolutely correct.

Over on the Lisp lists I sometimes make the point that Lisp failed precisely
because it was so powerful that one person could be productive enough to get
useful work done by themselves. As a result, Lisp tended (and maybe still
tends) to attract people who don't work well with others, whereas if you
program in C you have no choice but to work as part of a team if you want to
get anything done at all. At the end of the day teamwork wins, even when
hobbled by inferior tools.

~~~
DanWeinreb
That's an amusing idea, but it is not the case. I used Lisp extensively at the
MIT AI Lab, at Symbolics (I was a founder), and now at ITA Software. The
people worked together as teams very well in all places. The fact that you're
using Lisp does not change the need to work together on design, conventions,
architecture, etc, not to mention code reviews. The only top-grade hacker I
ever worked with who could not work with others was using C++ (not that I
think it matters).

~~~
Hexstream
"I used Lisp extensively at the MIT AI Lab, at Symbolics (I was a founder),
and now at ITA Software. The people worked together as teams very well in all
places."

That probably says more about the people and environments of the MIT AI Lab,
Symbolics, and ITA Software than about how Lisp affects team dynamics.

~~~
lisper
I don't claim this to be a hard and fast rule. It is obviously not impossible
for Lispers to work together. My theory is just that Lisp _tends_ to attract
non-team-players more than C does, and the macro effect of this is that in the
main C wins. There will, of course, be exceptions.

------
Zak
_Programming_ is hard. Learning to do it well requires above average logical
thinking and a significant amount of study. Good use of OO, FP, pointers,
recursion, data structures and concurrency are _all_ hard. Most people here
will have limited experience with something on that list and will likely
consider it hard.

The idea that programming is not hard, or that X will save us from the fact
that it's hard (where X is OO, FP, structured programming, DSLs, or even
Lisp), is an insult to the trade. Worse, it applies pressure to stop
advancing - after all, X is going to make this easy any day now.

~~~
maxklein
Programming is easy. But most people are not interested in learning it because
they have other things they would rather do. Those who are interested in
learning it, though, pick it up quite quickly - as opposed to those trying to
learn physics or math, both of which are hard subjects.

~~~
icky
> Programming is easy.

"Green field" programming is easy. Hell is other people's code.

------
arockwell
There's nothing intrinsically wrong with OO programming. I find that data
tends to map very naturally onto objects. However, the biggest problem I've
seen with OO is that people are tempted to build over-engineered solutions to
simple problems. Java's EJB 2.0 spec is a great example of OO design gone
wrong (long story short, you have to implement an obscene amount of
boilerplate code to get even simple things done). Anything that basically
requires an IDE to write your code for you just to get off the ground is
extremely counter-productive.

I think Java is particularly bad in this department, since it really
encourages the "let the IDE handle it for you" mentality.

~~~
aston
Yeah, but "let the IDE handle it" is a perfectly valid way to code.

When you want to run water to your house, you don't hire someone to design new
pipes for you; you use the ones pipe companies kick out. The boilerplate is
just plumbing your IDE manufactures for you. The only difference between it
and the magic that happens in dynamic languages with stuff like RoR is that
boilerplate code is explicit rather than implicit. I prefer the former,
truthfully, so I don't have to understand the whole stack to work on a small
piece of the code.

~~~
Zak
Good abstractions don't require you to understand the whole stack. Effective
use of a function, for example, requires you to know:

* What the valid arguments are

* What will be returned

* What side effects, if any, calling the function will cause

* (sometimes) What resources the function will utilize

Use of most other data types is simpler. Your framework or library should give
you this information in the documentation. If your development environment is
reasonably friendly, it will give you the arguments, and in many cases, you'll
be able to guess the rest of it based on the function name.
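To make those four bullet points concrete, here is a hypothetical example (the function, names, and numbers are invented for illustration) of a docstring that supplies exactly that contract, so a caller never needs to read the body:

```python
def transfer(accounts, src, dst, amount):
    """Move money between accounts.

    Arguments: accounts is a dict mapping name -> balance;
        src and dst are keys in it; amount is a non-negative number.
    Returns: the new balance of dst.
    Side effects: mutates the accounts dict in place.
    Resources: none (a pure in-memory operation).
    """
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    accounts[src] -= amount
    accounts[dst] += amount
    return accounts[dst]

accounts = {"alice": 100, "bob": 50}
print(transfer(accounts, "alice", "bob", 25))  # 75
```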

If you have to RTFS to figure out how to use your libraries, something is
wrong. Institutionalizing that by always putting the source right in front of
you is a Bad Thing.

~~~
aston
I'm not arguing against abstraction. I'm arguing that explicitly wiring
between components is better than implicit.

To make this concrete, take the example of implementing getters for a class in
Python. If my class has four fields

    
    
      class GetterDone:
        a = 1
        b = 2
        c = 3
        d = 4
    

I can do something like this inside the class to implement getters:

    
    
        def get_a(self):
           return self.a
        def get_b(self):
           return self.b
        def get_c(self):
           return self.c
        def get_d(self):
           return self.d
    

Or I can do something like this:

    
    
        def __getattr__(self, attr):
          if attr.startswith("get_"):
            return lambda: getattr(self, attr[4:])
          raise AttributeError
    

The latter does a good job of eliminating boilerplate, but at the cost of
making it nearly impossible for me to find out how get_a() is implemented
(assuming this was buried in a huge codebase).

~~~
DougBTX
Assuming you know the core Ruby library, would you call this explicit or
implicit?

    
    
        class GetterDone
          attr_reader :a, :b, :c, :d
        end
    

Rather than being a runtime attr.startswith check, this will generate actual
"return self.foo" methods in the class.

Think of this as generating code using a UI, except you record the minimal UI
commands, rather than putting the full output into your source files. The
commands themselves are still explicit, it's just that they contain only the
interesting information with the minimum amount of boilerplate.

~~~
aston
What you're talking about is essentially a syntax specifically for getters, I
think (I don't know Ruby that well). I guess my general point about "too much
magic" goes away when the magic is part of the language itself. That is, you
don't know the language if you don't know what magic its syntax does.

~~~
stcredzero
I'm not 100% sure, but attr_reader looks like it's actually a message send. If
it's not, then it could be implemented that way.

~~~
LogicHoleFlaw
Yes, it is a plain old method being executed in the context of the current
class. It creates new instance methods which can be used to read, but not
write, instance variables.

There are also methods to create read-write (attr_accessor) and write-only
(attr_writer) properties.
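For readers who don't know Ruby, a rough Python sketch of the same idea (the helper name attr_reader is borrowed from Ruby; this is an illustration of the technique, not how Ruby implements it) shows how real, introspectable reader methods can be generated instead of intercepted at call time:

```python
def attr_reader(cls, *names):
    """Generate real get_<name> methods on cls, one per attribute.
    Unlike a __getattr__ hook, these show up in dir(cls) and in
    ordinary introspection tools."""
    for name in names:
        def reader(self, _name=name):   # default arg binds name per iteration
            return getattr(self, _name)
        setattr(cls, "get_" + name, reader)
    return cls

class GetterDone:
    a, b, c, d = 1, 2, 3, 4

attr_reader(GetterDone, "a", "b", "c", "d")

print(GetterDone().get_c())  # 3
```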

------
dusklight
The big problem with OO is that it is a tool, not a religion. A lot of schools
teach it as a religion. There is nothing special about polymorphism or
inheritance or encapsulated data; there is no purpose in having any of these
things in your code for their own sake. What does have purpose is creating
easily reusable code that can be cleanly changed when requirements change. The
tools of OO (polymorphism, inheritance, encapsulated data, et al.) are one way
to achieve this. But while it is possible to use OO to achieve this kind of
code, using OO does not guarantee it.

I think the problem is further exacerbated by calling some languages "OOP" and
others functional, or procedural, or whatever. Fundamentally a programming
language is none of these things. It is possible to program in a procedural
style in Java (make everything static or singleton), and it is possible to
program functionally (though the syntax is very cumbersome). Similarly, it is
possible to program procedurally in Lisp (do, and, etc.) and object-orientedly
with CLOS. Some languages are more enjoyable in one style than another. But
if, like a lot of career programmers, you think programming in Java means you
are automatically writing good OO code, and you don't care about the quality
of the code you write (you just want to do a good enough job not to get fired,
and you know someone else is probably going to have to maintain it anyway),
then you write the code in the easiest, most quickly completed way instead of
the best overall way when considering debugging and maintenance time.

------
yariv
OO may not be inherently hard, but it is often used in overly complex ways,
which makes it seem hard.

The OO patterns movement probably had something to do with it. It's easy
sometimes to get the impression that with OO, you have to learn a bunch of
weird patterns to produce good code, but with FP you don't have this
conceptual overhead -- you just write the solution that feels the most compact
and natural, almost in a mathematical sense.

Where OO does usually make life harder is in concurrent programming. Because
objects have mutable state and hidden fields, it's difficult and often
dangerous to send (and receive) them between processes. In Erlang, for
example, all data is immutable and because it's made of simple (lists/tuples
of) primitives, it's very easy to send it as messages (to processes on the
local and/or remote VM) and to pattern match against it on the receiving end.
This makes concurrent and distributed programming probably as easy as it can
be.
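A loose Python sketch of this messaging style (threads and a Queue standing in for Erlang processes and mailboxes, which is a big simplification: Python threads share memory, Erlang processes don't). The point is only that plain immutable tuples make messages trivial to send and to take apart on the receiving end:

```python
import queue
import threading

def worker(inbox, outbox):
    # Messages are plain immutable tuples: (tag, payload...).
    while True:
        msg = inbox.get()
        if msg[0] == "square":
            tag, n = msg                # tuple unpacking as crude pattern matching
            outbox.put(("ok", n * n))
        elif msg[0] == "stop":
            return

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put(("square", 7))
reply = outbox.get()
inbox.put(("stop",))
t.join()
print(reply)  # ('ok', 49)
```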

I also find code written in functional languages easier to read and debug
because of data immutability. In code written in functional style, you know
exactly where every variable is bound, so it's easy to track down bugs that
cause it to have the wrong value. There's no mystery as to where a variable's
data may be modified, which gives you confidence in the correctness of your
solution.

I don't mean to suggest that FP is perfect and OO is broken. Sometimes when I
write FP code I wish I could grab one or two concepts from OO. OO code can
be quite elegant as well.

------
tptacek
Nothing. It's simply open-ended enough to argue about indefinitely. The
mistake you're making is taking the arguments seriously.

------
Hexayurt
Here's the fundamental problem with OO: not everything is an object.

Some problems decompose naturally into functions.

Some problems decompose naturally into procedures.

Some problems decompose naturally into objects.

OO has an _extremely_ poor fit with functional programming at a fairly deep
theoretical level. Procedural programming is often very useful for doing
synthesis on a data set. Object programming is often very useful for people
doing stuff to their virtual "things". The problem is that when a language
anchors to one abstraction or another at a fundamental level (Ruby, say),
programmers tend to think they have to follow along.

In truth, it's about finding an abstraction (or set of abstractions) that fits
the problem at hand. A lot of what makes OO hard is recognizing the cases
where one really needs to break down and say "but this bit? this bit is
procedural", and having the experience to be confident about dropping the OO
approach at those points.

~~~
DanWeinreb
Yes. That's why you want a multi-paradigm language, in which all of those
methods of writing code are available. The trick is to figure out how to
provide all of them in a way that is well-integrated. I feel that Common Lisp
does this very well. For example, when you want a method to get called, you
don't use a special "send message" operator; you use a generic function. It
looks just like a function call, to the caller; the fact that this call is
doing a method dispatch is part of its internal implementation, rather than
part of its externally-visible contract.

One reason Lisp has survived so long is that it is capable of absorbing so
many of the good new programming ideas as they come along. (No, it's not
perfect in this regard, but it's pretty darned good.)

~~~
stcredzero
What about some sort of clean meta-language framework? Lisp can claim to be
one of these - it's the only language that is its own abstract syntax tree.
Smalltalk goes a certain distance in that direction.

I think there is a great need for a multi-language framework, because
different languages have such disparate power depending on what you are
working on. I keep on thinking back to Rob Pike's Google Tech Talk on
Newsqueak. He spends 6 months developing a language that does concurrency at a
high level. After that, he writes a windowing GUI system in two hours.

What we need is the ability to support disparate language semantics and glue
them together easily. It would be great to be able to model your business
logic in Smalltalk, but write the GUI in something like Newsqueak.

Maybe Richard Stallman's original idea with GUILE was on the right track?

------
stcredzero
A big problem with OO is that most GUI interfaces are not a good match for it.
It's quite natural for people with a GUI builder to start writing a program
from the GUI down, since people tend to attach ideas to things that are more
concrete, like the window they've just made appear. This tends to produce bad
designs from the OO standpoint, however.

On the other hand, if people start writing up and passing around CRC cards,
they tend to produce better designs. My conclusion is that GUIs have too much
baggage from documents. In some important way, they are much worse than 3x5
cards that you can pass around. Once we figure this out, some aspects of
programming will improve.

It will probably always be hard.

~~~
Tichy
Even for GUIs I have a hard time seeing why OO would not be useful. In the
typical event-based environment (windows, frames, list boxes, whatever), how
would you implement it without objects?

I am probably spoiled because I did it for too long, but I find it really hard
these days to get by without objects.

I took up Scheme programming again, and I like it. I tried to keep it simple
and avoid objects, but I find myself missing them all the time. Writing
complex getters and setters with car and cadr into deeply nested lists does
NOT seem simpler.

I wish some of the functional programming gurus would write tutorials on non-
object-oriented programming, instead of ranting against OO programming.

~~~
Zak
I think it's generally a mistake to use list structures with position-based
accessors when what you really want is a struct or a dictionary of some sort.
Aside from sane getters and setters, what object features are you missing?
SRFI-1 gives you alists. It's pretty trivial to write a macro that lets you
make constructors with generated accessors. It's not much harder to add
inheritance to that. Generic functions and multiple dispatch can be
implemented using a global table of closures. Hey... this is starting to sound
a bit like CLOS, isn't it?
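That last idea can be sketched in a few lines of Python. This toy dispatch table (all names invented, and lacking CLOS's inheritance-aware method selection) picks an implementation based on the classes of all arguments, not just the first:

```python
# One global table mapping (generic name, argument classes) -> closure.
_methods = {}

def defmethod(name, *types):
    """Register the decorated function as the implementation of the
    generic `name` for exactly these argument types."""
    def register(fn):
        _methods[(name, types)] = fn
        return fn
    return register

def call(name, *args):
    # Multiple dispatch: look up on the classes of *all* arguments.
    fn = _methods.get((name, tuple(type(a) for a in args)))
    if fn is None:
        raise TypeError(f"no method {name} for {args!r}")
    return fn(*args)

@defmethod("add", int, int)
def _(x, y): return x + y

@defmethod("add", str, str)
def _(x, y): return x + " " + y

print(call("add", 1, 2))                  # 3
print(call("add", "multi", "dispatch"))   # multi dispatch
```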

~~~
Tichy
Inheritance is not important, but the organisation of data is. What if you
have a GUI with, say, a tree view element, a list view element, and a text
element? How would you create that in a non-OO way? I suppose you could write

    var tableData = ...
    var listData = ...
    var textData = ...

and then add callback methods (all in the same source file?). But that gets
out of hand real quick. It seems much nicer and cleaner to have a treeView
object that knows its own data and callback methods.

What if you have nested elements, like a window with those three elements, or
something else. How would you hand that data around?

I have seen the make-struct macro of MzScheme, but once you start using
inheritance with that, it starts getting really ugly, imo.

~~~
cturner
"Inheritance is not important, but the organisation of data"

Good point. I missed this when I first left Java, because in Java you get
these nice classes. Over time I've found I use class structures a lot less, or
in some situations I'll hack them. The other day I was writing a tree builder
where, in Java, I would have had a different class for "node", "attribute",
etc., all extending from element. In my Python impl I just had a class
"element" with a string field "e_type" which was "attribute", "node", etc.,
and then I stuck other stuff into a dictionary in that object as I needed it.
Over time I've found that the extreme brevity improvements, combined with
blocks of documentation describing the purpose of a grouping of code, more
than make up for the loss of code-based structure.
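A minimal sketch of the design described above, with the class and field names adapted from the comment and the rest guessed for illustration:

```python
class Element:
    """One class for every kind of tree element; the e_type string
    plays the role a subclass hierarchy would play in Java."""
    def __init__(self, e_type, **extra):
        self.e_type = e_type        # "node", "attribute", ...
        self.children = []
        self.extra = dict(extra)    # grab-bag of per-type fields

root = Element("node", name="html")
root.children.append(Element("attribute", name="lang", value="en"))

print(root.e_type, root.children[0].extra["value"])  # node en
```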

Again on the topic of organisation of data - in databases you get a lot of
automatic documentation because the schema abstraction we're used to is so
widespread and accepted. This is one of the reasons I love to stick with
relational databases even though I know that there are good arguments,
particularly in FP communities, advocating more practical forms of datastore.

~~~
DanWeinreb
It's true that when people learn object-oriented programming, and they learn
about inheritance, they tend to think that inheritance is something you should
be using heavily all over the place. It takes some experience to learn where
inheritance is proper and where there are better ways of doing things, such as
delegation.

I don't think OOP is that hard to learn, but I do agree that learning to use
it very effectively in the best and most tasteful way takes time and
experience. But that's true of so many aspects of software design; I don't
think OOP is all that different in this regard.

------
hobbs
One word: indirection.

With each level of indirection, you get one more level of sophistication, but
at the cost of one more level of complexity. OO has tons of indirection
(+cough+ polymorphism +cough+).

Back in the old C++ days, before decent IDEs, I remember tracking through
multiple code files in several different directories, just to figure out if an
add operator had been overloaded - and if so, how. Man, was that a complex
pain!

Lately, with Java, I've found myself in the same situation, but with XML
config files. Some Java developers just _love_ XML config files and often use
them to direct reflective code execution (dynamic language envy). Needless to
say, my IDEs are failing me again.

------
andreyf
_I think that like any programming technique OO has its place, and it is
certainly not useful in many circumstances._

Absolutely correct.

 _certainly not useful in many circumstances_

This is what, infuriatingly, well over 90% of CS graduates are missing.

~~~
gaius
Yes, try explaining to a "graduate trainee" that a table is not a class and a
row is not an object.

~~~
LogicHoleFlaw
In my university training at least, we did have a thorough understanding of
that distinction. We started with the relational algebra and went from there
to actual database implementation.

------
radu_floricica
It may be simplistic, but I find people try to model the problem in OO terms,
and when designing the code they keep the same model. For example, if there
are persons and cars, you make two classes, Person and Car, and use them in
code almost unchanged - but they are _problem_ entities, not necessarily code
entities. In code it may be simpler to just use maps of some sort.

------
cousin_it
My bigger programs tend to be about 60% procedural, 20% OO, 20% FP. The
highest level "business logic" is procedural - global functions, global
variables. Some lower level supporting code is OO, for polymorphism (make
certain kinds of objects interchangeable) and encapsulation (centralized
bookkeeping and cleanup for clumps of variables). Other lower level bits are
functional (systematically build up side-effect-free helper functions from
smaller ones) or a mix. I never use inheritance, design patterns, and all that
goop.

------
shaunxcode
I guess I lucked out, as my introduction to OO was via the Smalltalk course at
the OU (I say lucked out because it was the last year the course was taught;
it was subsequently replaced by Java...).

------
cconstantine
My first OO language was Java; I learned it in college. After learning Java I
thought I'd mastered OO.

Last year I learned Lisp; only now do I feel I've actually mastered OO, and I
see that Java has some major limits (such as single dispatch).

There is a real problem in that different people (e.g. the me from college vs.
the me from now) have very different and incompatible ideas of what OO
programming means.
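Python's standard library happens to make the single-dispatch limitation explicit: functools.singledispatch selects an implementation by the class of the first argument only, whereas CLOS multimethods can dispatch on all of them. A small illustration:

```python
from functools import singledispatch

@singledispatch
def describe(x):
    # Fallback when no more specific registration matches.
    return "something"

@describe.register
def _(x: int):
    return "an int"

@describe.register
def _(x: list):
    return "a list"

# Dispatch considers only the first argument's class; a CLOS-style
# multimethod could also specialize on a second argument.
print(describe(3), describe([1, 2]))  # an int a list
```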

~~~
DanWeinreb
I guess so, but actually I don't think that Java and CLOS have such a
different idea of what OOP means. They just have different specific features.
CLOS has multimethods and multiple inheritance; Java has explicit interfaces.

Frankly, multiple inheritance and multimethods are nice tools when you need
them, but I'd say that most of the time you don't. (Let me clarify that. Doing
what Java does with multiple inheritance of interfaces IS very important, and
it would be very bad to leave that out. What Java cannot do, namely multiple
inheritance of implementation, is much less crucial, and you can very often
get around it by using delegation. See "The Treaty of Orlando", which asserts
that multiple inheritance and delegation have the same inherent power; it was
formulated at an OOPSLA conference in the '80s, with Henry Lieberman as one of
the main authors.)

Anyway, my point is that Java does indeed have limitations, but the
fundamental concepts are very similar to those in CLOS.

~~~
cconstantine
Absolutely. The only real difference I could see is that methods belong to
classes in Java, while in CLOS methods stand alone and just happen to work
with objects.

I prefer CLOS over Java because of the list of features, but the point I was
trying to make is that different people (and even the same person at different
times) have different ideas of what OOP is. This leads to funky designs and
all kinds of messes.

------
DanielBMarkham
OO is all goodness, just like FP. But the way it is implemented by some people
can make it impossible to easily understand or modify -- just like FP.

OO is simply organizing your code before/as you write it. Combined with UML,
it can be a kick-ass way of describing your general strategy for dealing with
complex problems. Or it can take something simple and make it into a monster
-- it's up to you.

Another way of looking at OO is that you are building your own language as you
go along by starting with nouns (types) and adding all sorts of verb-clauses
to hook them together. You can do this in a super-cool, easy-to-understand
way, or you can get a bit carried away and try to recreate the dictionary when
you only need a few nouns and a few verbs.

OO's goal is not tight, beautiful, concurrent code. If you want to feel like
Picasso or Spock, go write FP.

For all of those reasons, OO just isn't as sexy as FP. You're not writing
something that scales to a zillion users right out of the gate. You're not
doing a lot of meta-programming, recursion, lambda calculus and such. You're
not writing anything sneaky, clever, or bound to impress the other nerds.
Everything just looks plain Jane. Add to that all of the examples of bad
pattern usage and other atrocities in the OO world and I can easily see why
other ways of doing things can seem more attractive.

------
lst
Wrapping your mind around OO (simple or bad or good or perfect as it might be)
may be much harder than creating some home-grown, intuitive dynamic
dispatching solution.

The only difficulty with the above approach is teamwork. In a team, you have
to find common ground to base your thinking on.

So, I never ever use OO, but maybe the only real reason is: I don't need to
work in a team...

