Ask YC: What exactly is so hard about OO?
25 points by bporterfield on June 3, 2008 | 59 comments
I've been feeling a little confused lately, given the litany of blog posts heralding the return of procedural programming or detailing how the layman programmer incompetently uses object-oriented languages.

To me, the concept seems somewhat straightforward, so I have to ask: What exactly is the difficulty people have with grasping OO programming and using it to construct cleaner, more maintainable code? What are the common mistakes? Am I unknowingly using OO wrong?! Are we ALL??

I think that, like any programming technique, OO has its place and is certainly not useful in many circumstances. I also don't think it's an incredibly complex concept for a programmer with a little experience, and it can be quite useful when implemented properly. Please correct me if I'm wrong.



The confusion about OO is due entirely to people's failure to apply Ron's First Law: All extreme positions are wrong. What happens is that someone stumbles upon a new technique that seems to be good for something, so they decide that it must be good for everything, and when they realize that it's not good for everything (typically ten years into the process) they decide that it was all hype and it must be good for nothing. So everyone stampedes off to find the Next Big Thing, whatever that is, and the whole sorry cycle starts all over again.

It's particularly amusing as a Lisp programmer to watch all these fads come and go because none of these ideas are new. The OO features of Java and C++ are just tiny subsets of the functionality in CLOS. XML and JSON are nothing more than (bad) re-inventions of S-expressions. Aspect-oriented programming is just a hack to get around the fact that Java doesn't have macros or multimethods. And on and on it goes.

The right answer, of course, is that OO (and every other programming technique) is good for some things and not other things. Figuring out what things a particular technique is and is not good for is a big part of mastering the art of programming.


+1 to your main point, but I had to respond to this side point, since I see it mentioned so often by Lispers:

"It's particularly amusing as a Lisp programmer to watch all these fads come and go because none of these ideas are new."

The takeaway from this is not that programmers are stupid and we should all be using Lisp, but that user interfaces matter. Social conventions matter. Installed bases matter. After all, most of Web 2.0 is just fancy rounded-corner reimplementations of 30-year-old UNIX utilities.

C succeeded because in one critical area - creating a fast, responsive UI on the hardware & compiler technology of the early 1980s - it worked and Lisp didn't. C++ and Java succeeded because programmers could take their C knowledge and syntax and apply it.

XML succeeded because it could leverage everyone's knowledge of HTML. HTML succeeded because at its heart, it's just text - you could start typing a plain old text document and it was valid HTML. A good part of JSON's success is because every JSON document is also legal JavaScript and legal Python.

Aspect-oriented programming generates interest because it's in Java, and people already use Java. And on and on it goes...

On purely technical grounds, the Lisp solution from 25 years ago is almost always better than the modern-day solution we're just rediscovering now. But the modern-day solution has the benefit of building on top of all the social conventions that have arisen in the last 25 years. That's worth remembering, particularly in a forum devoted to entrepreneurship. If you tell your customers "our solution is better, but you'll have to throw away everything you already know and start from scratch", they probably won't remain your customers for long.


C succeeded because it was the language of Unix, and it rode to success on Unix's success. C++ did indeed succeed because so many people knew C. Java has very little to do with C, except for the surface syntax, which makes it look as if it is similar to C, but its semantics are not like C at all.

You do not need to throw away everything you know to use Lisp. 90% of what you spent your time on when learning programming applies to Lisp just as well as to any other language: variables, procedures, iteration, and on and on. At ITA, when we hire new hackers who don't know Lisp, we just give them a copy of Peter Seibel's excellent book "Practical Common Lisp", and they quickly pick it up.

Common Lisp, as it is used in real-world practice, isn't nearly as strange as many people think. (Yes, we have a job to do to explain that to the world.) It's not a new paradigm the way Prolog or Haskell are. You can get started, doing the kinds of things you do in Java, very quickly. Now, learning all the more-powerful stuff, like Lisp macros and how to use them effectively and tastefully, and the more advanced object-oriented features of CLOS, does take some time. But then, learning all the useful stuff in the Java libraries takes time too, even if you're the best C programmer in the world.


Absolutely correct.

Over on the Lisp lists I sometimes make the point that Lisp failed precisely because it was so powerful that one person could be productive enough to get useful work done by themselves. As a result, Lisp tended (and maybe still tends) to attract people who don't work well with others, whereas if you program in C you have no choice but to work as part of a team if you want to get anything done at all. At the end of the day teamwork wins, even when hobbled by inferior tools.


That's an amusing idea, but it is not the case. I used Lisp extensively at the MIT AI Lab, at Symbolics (I was a founder), and now at ITA Software. The people worked together as teams very well in all places. The fact that you're using Lisp does not change the need to work together on design, conventions, architecture, etc, not to mention code reviews. The only top-grade hacker I ever worked with who could not work with others was using C++ (not that I think it matters).


"I used Lisp extensively at the MIT AI Lab, at Symbolics (I was a founder), and now at ITA Software. The people worked together as teams very well in all places."

That probably tells more about the people and environment of MIT AI Lab, Symbolics and ITA Software than the team dynamics of Lisp.


I don't claim this to be a hard and fast rule. It is obviously not impossible for Lispers to work together. My theory is just that Lisp tends to attract non-team-players more than C does, and the macro effect of this is that in the main C wins. There will, of course, be exceptions.


But But But.... multiple dispatch or multimethods will save us all!

Ahem.... right. Your response sounds more complete than mine below. :)


Programming is hard. Learning to do it well requires above average logical thinking and a significant amount of study. Good use of OO, FP, pointers, recursion, data structures and concurrency are all hard. Most people here will have limited experience with something on that list and will likely consider it hard.

The idea that programming is not hard, or that X will save us from the fact that it's hard (where X is OO, FP, structured programming, DSLs, or even Lisp), is an insult to the trade. Worse, it applies pressure to stop advancing - after all, X is going to make this easy any day now.


Agreed with everyone. Programming is hard. Computers are hard. Just ask my nana.

I suppose I should revise my question a bit. What I really meant to understand was what it is about OO that makes it a focus of conversation as a hard technique, compared to all these other programming techniques you hear little to no argument about.

For all its usefulness, you don't see too many blog posts vehemently defending or refuting recursion. It's simply another way to write some code. You don't hear Joel or Jeff going on about whether or not data structures are useful or are, on the whole, a degradation to programming communities. These are all just different techniques used to accomplish the same thing; in some cases, those techniques make achieving a goal easier or the code more elegant.

I do hear these things about OO programming, and yet, OO is just another one of these techniques! So, my question now becomes - not what is so hard about OO - but instead - what is so arguable about OO, that people feel the need to call it hard, useless, or revolutionary, while leaving the other techniques like poor recursion out in the cold?


recursion is not as overhyped and widespread as OO is.


i think you answered it yourself. it is just another tool, and it's ridiculously overemphasized


Programming is easy. But most people are not interested in learning it because they have other things they would rather do. But those who are interested in learning it come along quite quickly. As opposed to those trying to learn physics or math, both of which are hard subjects.


> Programming is easy.

"Green field" programming is easy. Hell is other people's code.


Programming is neither harder nor easier than mathematics: like mathematics, the difficulty is in the problems attacked. In both, the big advances come from asking new questions, and finding the right abstraction to deal with the new question. (Many people would say that programming *is* mathematics, albeit a new and in many ways distinct branch of mathematics).


Well, that is true, but you are forgetting that the "other things" are what the program is for. Physicists may write dodgy FORTRAN but that's fine, because the programs are only a side-effect of doing their real work. I would go so far as to say most of the code being written (if not most of the code that exists) is written by people who aren't programmers.


There's nothing intrinsically wrong with OO programming. I find that data tends to map very naturally onto objects. However, the biggest problem I've seen with OO is that people are tempted to build over-engineered solutions to simple problems. Java's EJB 2.0 spec is a great example of OO design gone wrong (long story short, you have to implement an obscene amount of boilerplate code to get even simple things done). Anything that basically requires you to have an IDE to write your code for you to get off the ground is extremely counter-productive.

I think Java is particularly bad in this department, since it often really encourages the "let the IDE handle it for you" mentality.


I was one of the designers at BEA for four years. (BEA made WebLogic, a leading J2EE server.) Indeed, J2EE requires a lot of annoying boilerplate. Yes, you have to let the IDE handle it for you, which is fine for writing code but not so great for reading (someone else's, or even your own) code.

I will assert, though I can't demonstrate it, that had this been done in Lisp, the boilerplate problem could have been substantially alleviated by using Lisp macros. One piece of evidence: J2EE has improved this problem substantially by using Java's relatively-new "annotation" feature, which, broadly speaking, has some of the same mojo as Lisp macros, when used properly.

On the other hand, you can do a lot better even in Java. Look at Spring and the other new post-J2EE frameworks. The experience with J2EE has led to learning and improvement in these new "dependency-injection" frameworks, which is definitely a good thing for the Java world.
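
Roughly the flavor of that, as a toy Python sketch rather than anything from J2EE or Spring (all names made up): a declarative "annotation" (here a decorator) registers a component, replacing the wiring boilerplate you'd otherwise write by hand.

    SERVICES = {}  # toy global registry standing in for a container

    def service(name):
        """Register a class under a name instead of hand-writing wiring code."""
        def register(cls):
            SERVICES[name] = cls
            return cls
        return register

    @service("greeter")
    class Greeter:
        def greet(self, who):
            return "Hello, " + who

    print(SERVICES["greeter"]().greet("world"))  # Hello, world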


Would this be another way of thinking of it:

Lisp macros are boilerplate without the duplication (and so without the drawbacks).


Yes, as long as you keep in mind that macros can be much more than that.


Yeah, but "let the IDE handle it" is a perfectly valid way to code.

When you want to run water to your house, you don't hire someone to design new pipes for you; you use the ones pipe companies kick out. The boilerplate is just plumbing your IDE manufactures for you. The only difference between it and the magic that happens in dynamic languages with stuff like RoR is that boilerplate code is explicit rather than implicit. I prefer the former, truthfully, so I don't have to understand the whole stack to work on a small piece of the code.


Good abstractions don't require you to understand the whole stack. Effective use of a function, for example, requires you to know:

* What the valid arguments are

* What will be returned

* What side effects, if any, calling the function will cause

* (sometimes) What resources the function will utilize

Use of most other data types is simpler. Your framework or library should give you this information in the documentation. If your development environment is reasonably friendly, it will give you the arguments, and in many cases, you'll be able to guess the rest of it based on the function name.

If you have to RTFS to figure out how to use your libraries, something is wrong. Institutionalizing that by always putting the source right in front of you is a Bad Thing.
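
To make that concrete, a sketch with a made-up function: if the documentation answers those questions, the caller never needs the source.

    import logging

    def resize_image(path, width, height):
        """Resize the image at `path` in place.

        Arguments: `path` is an existing file path; `width` and `height` are positive pixel counts.
        Returns: the (width, height) actually written.
        Side effects: overwrites the file at `path` and logs one INFO line.
        Resources: local filesystem.
        """
        logging.info("resizing %s to %dx%d", path, width, height)
        # actual resizing elided -- the point is the contract above
        return (width, height)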


I'm not arguing against abstraction. I'm arguing that explicitly wiring between components is better than implicit.

To make this concrete, take the example of implementing getters for a class in Python. If my class has four fields

  class GetterDone:
    a = 1
    b = 2
    c = 3
    d = 4
I can do something like this inside the class to implement getters:

    def get_a(self):
      return self.a
    def get_b(self):
      return self.b
    def get_c(self):
      return self.c
    def get_d(self):
      return self.d
Or I can do something like this:

    def __getattr__(self, attr):
      if attr.startswith("get_"):
        return lambda: getattr(self, attr[4:])
      raise AttributeError(attr)
The latter does a good job of eliminating boilerplate, but at the cost of making it near impossible for me to find out how get_a() is implemented (assuming this was buried in a huge codebase).


Assuming you know the core Ruby library, would you call this explicit or implicit?

    class GetterDone
      attr_reader :a, :b, :c, :d
    end
Rather than being a runtime attr.startswith check, this will generate actual "return self.foo" methods in the class.

Think of this as generating code using a UI, except you record the minimal UI commands, rather than putting the full output into your source files. The commands themselves are still explicit, it's just that they contain only the interesting information with the minimum amount of boilerplate.


What you're talking about is essentially a syntax specifically for getters, I think (I don't know Ruby that well). I guess my general point about "too much magic" goes away when the magic is part of the language semantics. That is, you don't know the language if you don't know what magic its syntax does.


I don't believe that a typical Java IDE provides a clean enough abstraction to make code generation a big benefit. When it comes down to it, you're still looking at a window full of all the code that the IDE spit out for you. Think of code generation via a compiler versus code generation via an IDE. The code that you see in the window of your IDE should be solely for human eyes. If a lot of it can be trivially automated, then it doesn't deserve space in that window.


I'm not 100% sure, but attr_reader looks like it's actually a message send. If it's not, then it could be implemented that way.


Yes, it is a plain old method being executed in the context of the current class. It creates new instance methods which can be used to read, but not write, instance variables.

There are methods to create read-write (attr_accessor) and write-only (attr_writer) accessors as well.


Or you can just realize that get_X is the Wrong Thing and you should just write GetterDoneInstance.X instead.


Great job ignoring the substance. I thought it'd be better to come up with an unrealistic but simple example rather than to paste in the source for Pylons...

Just play along and imagine a more complex world. If it makes you happier, pretend the getter is intended to return the string version of the numbers or some other operation that would make just using instance.a wrong.


But using instance.a is not wrong because you can override __getattr__. And if overriding __getattr__ is not the right thing then what you are writing is not a getter. Under no circumstances is it justified IMHO to write a get_X method. (And if your language doesn't let you override __getattr__ then you need a better language.) IMHO of course.
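
For what it's worth, a minimal Python sketch of that claim (made-up class): callers keep writing instance.a even after the value has to be computed, so no get_a() ever appears in the interface. A property is used here; overriding __getattr__ would work the same way.

    class GetterDone:
        def __init__(self):
            self._a = 1

        @property
        def a(self):
            # later requirement: expose the value as a string
            return str(self._a)

    g = GetterDone()
    print(g.a)  # "1" -- still plain attribute access, no get_a() anywhere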


If the code is so simple that I can hit a button and have my IDE generate my code for me then I really question why I need to write the code in the first place.

A simple example of this is generating getters/setters. Autogenerating the code works great 99% of the time. The problem comes in for that 1% case where a non-templated getter or setter is needed. That special case is easy to lose track of in the forest of automatically generated code. Groovy solves this problem by automatically generating all of the getters/setters for you, but if you need to you can still override the automatic definition with a specific implementation.

I won't argue with you that too much magic can be a bad thing. However, for the code I typically generate with my IDE, I really think I'm wasting time and bloating my code.


Exactly. If your IDE can just generate it, then why is it needed at all? And, as I said, it makes the code harder to read. You have to look over all the boilerplate and see if it's all exactly the usual boilerplate, or whether one or two methods in there were manually improved. If the IDE has to generate it, it means that the programming language is expressing your wishes at too low a level of abstraction, rather than more-directly reflecting what you were trying to say in the first place. So you have hit the nail precisely on the head.


The big problem with OO is that it is a tool, not a religion. A lot of schools teach it as a religion. There is nothing special about polymorphism or inheritance or encapsulated data. There is no purpose in having any of these things in your code for their own sake. What does have purpose is to create easily reusable code that can be cleanly changed when requirements change. The tools of OO (polymorphism, inheritance, encapsulated data, et al.) are one way to achieve this. But while it is possible to use OO to achieve this kind of code, using OO does not guarantee it.

I think the problem is further exacerbated by calling some languages "OOP" and others functional, or procedural, or whatever. Fundamentally, a programming language is none of these things. It is possible to program in a procedural style in Java (make everything static or a singleton), and it is possible to program functionally (but the syntax is very cumbersome). Similarly, it is possible to program procedurally in Lisp (do, and, etc.) and to program with objects using CLOS. Some languages are more enjoyable in one style than another. But a lot of career programmers think that programming in Java means they are writing good OO code; and because they don't care about the quality of the code they write (they just want to do a good enough job not to get fired, and they know someone else will probably have to maintain it anyway), they write the code in the easiest, most quickly completed way instead of the best overall way once debugging and maintenance time is considered.


OO may not be inherently hard, but it is often used in overly complex ways, which makes it seem hard.

The OO patterns movement probably had something to do with it. It's easy sometimes to get the impression that with OO, you have to learn a bunch of weird patterns to produce good code, but with FP you don't have this conceptual overhead -- you just write the solution that feels the most compact and natural, almost in a mathematical sense.

Where OO does usually make life harder is in concurrent programming. Because objects have mutable state and hidden fields, it's difficult and often dangerous to send (and receive) them between processes. In Erlang, for example, all data is immutable and because it's made of simple (lists/tuples of) primitives, it's very easy to send it as messages (to processes on the local and/or remote VM) and to pattern match against it on the receiving end. This makes concurrent and distributed programming probably as easy as it can be.
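
Not Erlang, but a rough Python approximation of what that buys you (a made-up worker): messages are plain tuples, so they're trivial to send between processes and to take apart on the receiving side.

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        while True:
            msg = inbox.get()
            if msg == ("stop",):
                break
            tag, a, b = msg              # "pattern match" by destructuring
            if tag == "add":
                outbox.put(("ok", a + b))

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        inbox.put(("add", 2, 3))
        print(outbox.get())              # ('ok', 5)
        inbox.put(("stop",))
        p.join()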

I also find code written in functional languages easier to read and debug because of data immutability. In code written in a functional style, you know exactly where every variable is bound, so it's easy to track down bugs that cause it to have the wrong value. There's no mystery as to where a variable's data may be modified, which gives you confidence in the correctness of your solution.

I don't mean to suggest that FP is perfect and OO is broken. Sometimes when I write FP code I wish I could grab one or two concepts from OO. OO code can also be quite elegant as well.


Nothing. It's simply open-ended enough to argue about indefinitely. The mistake you're making is taking the arguments seriously.


Here's the fundamental problem with OO: not everything is an object.

Some problems decompose naturally into functions.

Some problems decompose naturally into procedures.

Some problems decompose naturally into objects.

OO has an extremely poor fit with functional programming at a fairly deep theoretical level. Procedural programming is often very useful for doing synthesis on a data set. Object programming is often very useful for people doing stuff to their virtual "things". The problem is that when languages anchor to one abstraction or the other at a fundamental level (Ruby), programmers tend to think they have to follow along.

In truth, it's about finding an abstraction (or set of abstractions) that fit the problem at hand. A lot of what makes OO hard is cases where one really needs to break down and say "but this bit? this bit is procedural" and having the experience to be confident of dropping the OO approach at those points.


Yes. That's why you want a multi-paradigm language, in which all of those methods of writing code are available. The trick is to figure out how to provide all of them in a way that is well-integrated. I feel that Common Lisp does this very well. For example, when you want a method to get called, you don't use a special "send message" operator; you use a generic function. It looks just like a function call, to the caller; the fact that this call is doing a method dispatch is part of its internal implementation, rather than part of its externally-visible contract.
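
Python's functools.singledispatch is a pale imitation of a CLOS generic function (it dispatches on one argument only), but it shows the surface point: the caller just makes an ordinary call, and the dispatch is an implementation detail.

    from functools import singledispatch

    @singledispatch
    def describe(x):                 # the default method
        return "something: %r" % (x,)

    @describe.register(int)
    def _(x):
        return "an integer: %d" % x

    @describe.register(list)
    def _(x):
        return "a list of %d items" % len(x)

    print(describe(42))       # an integer: 42
    print(describe([1, 2]))   # a list of 2 items
    print(describe("hi"))     # something: 'hi'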

One reason Lisp has survived so long is that it is capable of absorbing so many of the good new programming ideas as they come along. (No, it's not perfect this way, but it's pretty darned good.)


What about some sort of clean meta-language framework? Lisp can claim to be one of these -- it's the only language that is its own abstract syntax tree. Smalltalk goes a certain distance in that direction.

I think there is a great need for a multi-language framework, because different languages have such disparate power depending on what you are working on. I keep on thinking back to Rob Pike's Google Tech Talk on Newsqueak. He spends 6 months developing a language that does concurrency at a high level. After that, he writes a windowing GUI system in two hours.

What we need is the ability to support disparate language semantics and glue them together easily. It would be great to be able to model your business logic in Smalltalk, but write the GUI in something like Newsqueak.

Maybe Richard Stallman's original idea with GUILE was on the right track?


A big problem with OO is that most GUIs are not a good match for it. It's quite natural for people with a GUI builder to start writing a program from the GUI down, since people tend to like attaching ideas to things that are more concrete, like the window they've just made appear. This tends to produce bad designs from the OO standpoint, however.

On the other hand, if people start writing up and passing around CRC cards, they tend to produce better designs. My conclusion is that GUIs have too much baggage from documents. In some important way, they are much worse than 3x5 cards that you can pass around. Once we figure this out, some aspects of programming will improve.

It will probably always be hard.


Even for GUIs I have a hard time seeing why OO would not be useful. In the typical event-based environment (windows, frames, list boxes, whatever), how would you implement it without objects?

I am probably spoiled because I did it for too long, but I find it really hard these days to get by without objects.

I took up Scheme programming again, and I like it. I tried to keep it simple and avoid objects, but I find myself missing them all the time. Writing complex getters and setters with car and cdr into deeply nested lists does NOT seem simpler.

I wish some of the functional programming gurus would write tutorials on non-object-oriented programming, instead of ranting against OO programming.


I think it's generally a mistake to use list structures with position-based accessors when what you really want is a struct or a dictionary of some sort. Aside from sane getters and setters, what object features are you missing? SRFI-1 gives you alists. It's pretty trivial to write a macro that lets you make constructors with generated accessors. It's not much harder to add inheritance to that. Generic functions and multiple dispatch can be implemented using a global table of closures. Hey... this is starting to sound a bit like CLOS, isn't it?
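
A rough sketch of that "global table" idea, in Python rather than Scheme (all names invented): dispatch on the types of every argument by keying a dict on them.

    METHODS = {}  # the "global table": (name, arg types...) -> implementation

    def defmethod(name, *types):
        def add(fn):
            METHODS[(name,) + types] = fn
            return fn
        return add

    def call(name, *args):
        fn = METHODS[(name,) + tuple(type(a) for a in args)]
        return fn(*args)

    @defmethod("collide", int, int)
    def _(a, b):
        return "int/int collision"

    @defmethod("collide", int, str)
    def _(a, b):
        return "int/str collision"

    print(call("collide", 1, 2))    # int/int collision
    print(call("collide", 1, "x"))  # int/str collision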


Inheritance is not important, but the organisation of data is. What if you have a GUI with, say, a tree view element, a list view element, and a text element? How would you create that in a non-OO way? I suppose you could write

  var tableData = ...
  var listData = ...
  var textData = ...

and then add callback methods (all in the same source file?). But that gets out of hand real quick. It seems much nicer and cleaner to have a treeView object that knows its own data and callback methods.

What if you have nested elements, like a window with those three elements, or something else. How would you hand that data around?

I have seen the make-struct macro of MzScheme, but once you start using inheritance with that, it starts getting really ugly, imo.


"Inheritance is not important, but the organisation of data"

Good point. I missed this when I first left Java, because in Java you get these nice classes. Over time I've found I use class structures a lot less, or in some situations I'll hack them. The other day I was writing a tree builder where in java I would have had a different class for "node", "attribute", etc, all extending from element. In my python impl I just had a class "element" with a string field "e_type" which was "attribute", "node", etc and then I stuck other stuff into a dictionary in that object as I needed it. Over time I've found that the extreme brevity improvements combined with blocks of documentation to describe the purpose of a grouping of code more than make up for the loss of code-based structure.
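
Reconstructed from that description (names guessed), the whole hierarchy collapses to something like:

    class Element:
        def __init__(self, e_type, **extra):
            self.e_type = e_type     # "node", "attribute", ...
            self.data = dict(extra)  # whatever this element happens to need

    root = Element("node", name="html")
    attr = Element("attribute", name="lang", value="en")
    root.data["children"] = [attr]

    print(root.e_type, [c.e_type for c in root.data["children"]])  # node ['attribute']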

Again on the topic of organisation of data - in databases you get a lot of automatic documentation because the schema abstraction we're used to is so widespread and accepted. This is one of the reasons I love to stick with relational databases even though I know that there are good arguments, particularly in FP communities, advocating more practical forms of datastore.


It's true that when people learn object-oriented programming, and they learn about inheritance, they tend to think that inheritance is something you should be using heavily all over the place. It takes some experience to learn where inheritance is proper and where there are better ways of doing things, such as delegation.

I don't think OOP is that hard to learn, but I do agree that learning to use it very effectively in the best and most tasteful way takes time and experience. But that's true of so many aspects of software design; I don't think OOP is all that different in this regard.


One word: indirection.

With each level of indirection, you get one more level of sophistication, but at the cost of one more level of complexity. OO has tons of indirection (+cough+ polymorphism +cough+).

Back in the old C++ days, before decent IDE's, I remember tracking through multiple code files in several different directories, just to figure out if an add operator had been overloaded - and if so, how. Man, was that a complex pain!

Lately, with Java, I've found myself in the same situation, but with XML config files. Some Java developers just love XML config files and often use them to direct reflective code execution (dynamic language envy). Needless to say, my IDE's are failing me again.


I think that, like any programming technique, OO has its place and is certainly not useful in many circumstances.

Absolutely correct.

and is certainly not useful in many circumstances

This is what, infuriatingly, well over 90% of CS graduates are missing.


Yes, try explaining to a "graduate trainee" that a table is not a class and a row is not an object.


In my university training at least, we did have a thorough understanding of that distinction. We started with the relational algebra and went from there to actual database implementation.


It may be simplistic, but I find people try to model the problem in OO terms, and when designing the code they keep the same model. Something like: if there are persons and cars, you make two classes, Person and Car, and use them in code almost unchanged. But they are _problem_ entities, not necessarily code entities. In code it may be simpler to just use maps of some sort.
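
For example (a toy sketch in Python), the problem-level Person and Car can stay plain maps in the code:

    people = [
        {"name": "Ada", "car_id": 1},
        {"name": "Bob", "car_id": None},
    ]
    cars = {1: {"make": "Volvo", "year": 1998}}

    for p in people:
        car = cars.get(p["car_id"])
        print(p["name"], "drives", car["make"] if car else "nothing")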


My bigger programs tend to be about 60% procedural, 20% OO, 20% FP. The highest level "business logic" is procedural - global functions, global variables. Some lower level supporting code is OO, for polymorphism (make certain kinds of objects interchangeable) and encapsulation (centralized bookkeeping and cleanup for clumps of variables). Other lower level bits are functional (systematically build up side-effect-free helper functions from smaller ones) or a mix. I never use inheritance, design patterns and all that gook.


I guess I lucked out, as my introduction to OO was via the Smalltalk course at the OU. (I say lucked out because it was the last year the course was taught; it was subsequently replaced by Java...)


My first OO language was Java; I learned it in college. After learning Java I thought I'd mastered OO.

Last year I learned Lisp; now I understand that I've only just mastered OO and that Java has some major limits (such as single dispatch).

There is a real problem in that different people (e.g. the me from college vs. the me from now) have very different and incompatible ideas of what OO programming means.


I guess so, but actually I don't think that Java and CLOS have such a different idea of what OOP means. They just have different specific features. CLOS has multimethods and multiple inheritance; Java has explicit interfaces.

Frankly, multiple inheritance and multimethods are nice tools when you need them, but I'd say that most of the time, you don't. Let me clarify that: doing what Java does with multiple inheritance of interfaces IS very important, and it would be very bad to leave that out. What Java cannot do, namely multiple inheritance of implementation, is much less crucial, and you can very often get around it by using delegation. See "The Treaty of Orlando", which asserts that multiple inheritance and delegation have the same inherent power. (This was formulated at an OOPSLA conference in the 80's; Henry Lieberman was one of the main authors.)
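
A small sketch of that delegation workaround, with made-up classes: instead of inheriting implementation from two parents, hold one of them and forward to it.

    class Logger:
        def log(self, msg):
            print("LOG:", msg)

    class Widget:
        def draw(self):
            print("drawing")

    class LoggingWidget(Widget):         # single inheritance...
        def __init__(self):
            self._logger = Logger()      # ...plus delegation for the rest

        def log(self, msg):
            return self._logger.log(msg) # forward instead of inherit

    w = LoggingWidget()
    w.draw()
    w.log("drew a widget")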

Anyway, my point is that Java does indeed have limitations, but the fundamental concepts are very similar to those in CLOS.


Absolutely. The only real difference I could see is that in Java methods belong to classes, whereas in CLOS methods just happen to work on objects.

I prefer CLOS over Java because of the list of features, but the point I was trying to make is that different people (and even the same person at different times) have different ideas of what OOP is. This leads to funky designs and all kinds of messes.


That's only a problem if you think the terminology matters. It doesn't. What matters are the concepts. It's much easier to map the terminology onto the concepts than to try to go the other way around.

For example, it's much easier to go from a concept like "I can use the SAME NAME for DIFFERENT FUNCTIONS in the SAME PROGRAM, and there are various ways to map useful semantics onto the resulting code" to "If I resolve the ambiguity at run-time then it's POLYMORPHISM and if I do it at compile time then it's OPERATOR OVERLOADING" than it is to start with the terms POLYMORPHISM and OPERATOR OVERLOADING and try to learn what they mean and what they are good for.
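
Concretely, the run-time case (a toy Python example): two classes define different functions under the same name, and the ambiguity is resolved at run time by the type of the object.

    class Circle:
        def __init__(self, r):
            self.r = r
        def area(self):
            return 3.14159 * self.r ** 2

    class Square:
        def __init__(self, side):
            self.side = side
        def area(self):
            return self.side ** 2

    for shape in (Circle(1.0), Square(2.0)):
        print(type(shape).__name__, shape.area())  # which area() runs is decided at run time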


I think the problem is more along the lines of thinking that OOP means one specific subset of a long list of features. I agree that it doesn't matter much what the feature is called in any given implementation. PG has published such a list by Jonathan Rees here: http://paulgraham.com/reesoo.html


OO is all goodness, just like FP. But the way it is implemented by some people can make it impossible to easily understand or modify -- just like FP.

OO is simply organizing your code before/as you write it. Combined with UML, it can be a kick-ass way of describing your general strategy for dealing with complex problems. Or it can take something simple and make it into a monster -- it's up to you.

Another way of looking at OO is that you are building your own language as you go along by starting with nouns (types) and adding all sorts of verb-clauses to hook them together. You can do this in a super-cool, easy-to-understand way, or you can get a bit carried away and try to recreate the dictionary when you only need a few nouns and a few verbs.

OO's goal is not tight, beautiful, concurrent code. You want to feel like Picasso or Spock, go write FP.

For all of those reasons, OO just isn't as sexy as FP. You're not writing something that scales to a zillion users right out of the box. You're not doing a lot of meta-programming, recursion, lambda calculus and such. You're not writing anything sneaky, clever, or bound to impress the other nerds. Everything just looks plain Jane. Add to that all of the examples of bad pattern usage and other atrocities in the OO world and I can easily see why other ways of doing things can seem more attractive.


Wrapping your mind around OO (simple or bad or good or perfect as it might be) may be much harder than creating some home-grown, intuitive dynamic dispatching solution.

The only difficulty with the above approach is teamwork. In a team, you have to find common ground to base your thinking on.

So, I never ever use OO, but maybe the only real reason is: I don't need to work in a team...



