Object Oriented Programming is Inherently Harmful (cat-v.org)
149 points by idoco on Nov 30, 2014 | 187 comments

At some point I realized I don't really need OOP itself, but I like a couple of bits it provides. Mostly:

- The syntax. I like to be able to say thing.doSomething(). It doesn't always make sense, but sometimes subject-verb syntax is more natural than a function call.

- Polymorphic dispatch (is that the term?) to replace if blocks. Instead of `if (thing is Car) thing.drive() else if (thing is Boat) thing.swim()`, it's just thing.move(). Pattern matching in functional languages solves this in a different way.

- Interfaces are nice.

I guess I could be happy in a language that just has structs and functions, and some help in form of pattern matching, multimethods and so on.
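A minimal Rust sketch of that second point (hypothetical `Car`/`Boat` types): the call site dispatches through a trait, so no if/else chain on the concrete type is needed.

```rust
// Trait-based dispatch: each type supplies its own `move_it`,
// so the call site needs no type test at all.
trait Vehicle {
    fn move_it(&self) -> &'static str;
}

struct Car;
struct Boat;

impl Vehicle for Car {
    fn move_it(&self) -> &'static str { "drive" }
}

impl Vehicle for Boat {
    fn move_it(&self) -> &'static str { "swim" }
}

fn main() {
    // `thing.move_it()` instead of `if (thing is Car) ... else ...`
    let things: Vec<Box<dyn Vehicle>> = vec![Box::new(Car), Box::new(Boat)];
    for thing in &things {
        println!("{}", thing.move_it());
    }
}
```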


As in any situation, to understand what someone is saying, you have to know who they are talking to.

People who hate OO always refer to overly bloated systems that treat OO as a philosophical foundation, something to be religiously adhered to. I'm sure those systems exist and I'm sure that they're horrible. But, y'know, it's just awfully nice to have an easy way to define structures and functions that operate on them, and the ability to use some structures interchangeably with other, related ones.

Objects or Haskell-like records, I'm happy with either. I like multiple inheritance but mixins are pretty neat too. Encapsulation is a good thing, but I could probably learn to live with any kind of decent modularity and separation of concerns. Et cetera.

Monads, modules, and type classes (the bare essentials) make me never want to go back to Object Oriented Programming. That's not to mention all of the really clean and elegant software now blossoming in the Haskell community (Lens, Applicatives, Monoids, GADTS, pipes, etc...)

So far, it seems that a well-designed OO language can get you closer to the best of both worlds than if you start from ML-style functional languages.

Consider that C# lets you have all three of your essentials:

It doesn't have higher-kinded types, but it does have monad comprehensions (LINQ + SelectMany)… the special syntax is nice but not necessary.

It has type classes in the form of implicit conversions.

It has modules in the form of static classes.

C# even lets you do Smalltalk-style OO without too much fuss.

The biggest problem with C#, and perhaps OO in general, is the lack of intellectual curiosity.

"It has type classes in the form of implicit conversions."

Could you expand on this?

Agreed. What I see as the core principles of OO (ad-hoc polymorphic dispatch and abstract data types) are present, useful and important in the FP world too. OO helped to bring attention to these ideas even if it didn't capture them in their purest and most general form. So I think we should be careful not to throw the baby out with the bathwater when criticising OO.

> Pattern matching in functional languages solves this [polymorphic dispatch] in a different way.

Pattern-matching doesn't give you late binding / ad-hoc polymorphism though, so while it's useful to replace static if statements or case statements, it's not a full replacement for OO's polymorphic dispatch. For that you need things like typeclasses in Haskell or multimethods in lisps.

Wadler's famous expression problem is quite relevant here: http://homepages.inf.ed.ac.uk/wadler/papers/expression/expre...

Logically, closed extension and open extension are different things. OO conflates them together for no reason other than having the polymorphic dispatch hammer. FP equivalents for polymorphic dispatch are pattern matching for closed extension and plain old first class functions for open extension.
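A sketch of that split in Rust (illustrative toy types): closed extension via an exhaustive `match`, open extension via a plain first-class function value.

```rust
// Closed extension: the set of variants is fixed; adding a new operation
// is one new `match`, but adding a variant touches every match.
enum Shape {
    Square(f64),
    Circle(f64),
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Square(side) => side * side,
        Shape::Circle(r) => std::f64::consts::PI * r * r,
    }
}

// Open extension: behavior arrives as a first-class function, so callers
// can supply cases the author never anticipated -- no class hierarchy needed.
fn describe(s: &Shape, namer: impl Fn(&Shape) -> &'static str) -> String {
    format!("{} with area {:.2}", namer(s), area(s))
}

fn main() {
    let sq = Shape::Square(2.0);
    let text = describe(&sq, |s| match s {
        Shape::Square(_) => "square",
        Shape::Circle(_) => "circle",
    });
    println!("{}", text); // square with area 4.00
}
```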

> I guess I could be happy in a language that just has structs and functions, and some help in form of pattern matching, multimethods and so on.

So pretty much Rust you mean?

Good points.

The D language has a way of providing your first point without resorting to objects. Basically, the compiler just transforms a statement like `str.length()` into `length(str)`.

The other two seem harder to do in a simple way, but I'm sure someone has come up with something.
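Rust handles the first point similarly: method syntax is sugar for a function call taking the receiver, and an extension trait (the `Shout` trait here is a made-up example) can bolt subject-verb syntax onto a type you don't own.

```rust
// An extension trait adds `.shout()` method syntax to `str`,
// even though we don't own the `str` type.
trait Shout {
    fn shout(&self) -> String;
}

impl Shout for str {
    fn shout(&self) -> String {
        self.to_uppercase() + "!"
    }
}

fn main() {
    // Both spellings resolve to the same function.
    assert_eq!("hello".shout(), Shout::shout("hello"));
    println!("{}", "hello".shout()); // HELLO!
}
```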

Also: polymorphic dispatch is faster with OO (which is what Rust/Servo people have been figuring out recently).

Yes. If you can generate code on the fly [1], you can handle a subset of cases very efficiently.

[1]: JIT. Just in case some people confuse concepts, I'm not talking about JVM, virtual machines and such. Just plain runtime machine language generation.

The problem with JIT is that it's not very simple. Sure, you can write your own single-purpose runtime-code-generating routine. But the beauty of OO is that it's widely applicable. If you want to achieve comparable speeds with JIT, you pretty much need some sort of a virtual machine; in particular, simple method-at-a-time JITs are not very efficient; you need some sort of measuring or tracing, so that you know what code to compile and how to optimize it.

Sure high performance generic JITs are not simple.

NOTE: The following is meant for dynamically dispatched lightweight methods, such as those that just read (or compare) or store (or modify) a simple field value, or do other simple operations where dispatch and call-frame setup costs dominate, in inner loops executed enough times to amortize the codegen overhead.

The rationale is that branches are very expensive on modern CPUs, especially indirect subroutine (= method or function) calls. Consistently 14-20 clock cycles lost in addition to whatever time call frame setup takes, pushing and popping registers from stack. If inner loop cost is otherwise just a cycle or two and the dispatched method is trivial, it's easy to see how branch mispredict and call setup costs can dominate. In some cases some unrolling and prefetching could be done in addition.

In this case I envision routines that fuse (inline) dynamically dispatched (function pointer, message passing, pattern recognition) method inside a loop. A compiler could generate the rules and runtime system can do the actual composition. The method would then be dynamically inlined within the loop. No call, no function pointer call (=vtable), no full pattern recognition, no message dispatch, etc. Just fused, inlined code.

Pattern match can't be completely resolved early? Just generate code inside the loop that does the rest of the work with tests and branches. Try the most likely case always first (may require some profiling). Same with other dynamic dispatch.

Many cases, but known exponential distribution of probabilities? Inline some most likely cases, dynamically dispatch the rest. Like before, most likely first and some profiling may be required.

This way very expensive (data dependent) indirect function call is avoided. Call frame setup and teardown is avoided. The savings can be an order of magnitude for short dispatched methods.

Definitely no complete virtual machine needed.

Note: Edited multiple times.

Hm... not sure if I agree (if I understand this correctly).

If in the inner loop, the same method is called (of the same class), this means that the target of the indirect function call is always the same, and I assume that the CPU can predict that. If not, the compiler can hoist the lookup out of the loop. In certain languages, e.g. in Objective C (where lookups are dynamic, and much more expensive than vtable approach), it's a common programming pattern to manually hoist the lookup out of the loop, get the method descriptor (i.e. the jump target) and call the descriptor in the loop, without going through the lookup again.

If the inner loop loops over objects of different class (i.e. you're looping over an array of objects, each can be a different sub-class, and you're calling a virtual method), I don't think there is any way to improve vtable based approach.
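The hoisting idea can be sketched in Rust (contrived `lookup`/`step` names; the per-call "lookup" stands in for whatever dynamic resolution a language like Objective C performs):

```rust
// Hoisting a dynamic lookup out of a hot loop: resolve the call target
// once, then make the loop's calls through the already-resolved pointer.
fn add_one(x: i64) -> i64 { x + 1 }
fn add_two(x: i64) -> i64 { x + 2 }

fn lookup(name: &str) -> fn(i64) -> i64 {
    // Imagine this being expensive (hash lookup, message dispatch, ...).
    match name {
        "add_two" => add_two,
        _ => add_one,
    }
}

fn main() {
    // Resolve once, outside the loop...
    let step = lookup("add_two");
    let mut acc = 0i64;
    for _ in 0..1000 {
        // ...so each iteration pays only for the indirect call, not the lookup.
        acc = step(acc);
    }
    println!("{}", acc); // 2000
}
```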

Well, last I checked, the cost of an indirect function call for simple operations was about 5-10x compared to inlining. Each function call consists of at least two branches: the call and the return. The caller needs to preserve any registers that may be altered by the callee according to ABI conventions; same for the callee.

That is for correctly branch predicted calls.

Far worse for mispredicted calls.

> If the inner loop loops over objects of different class (i.e. you're looping over an array of objects, each can be a different sub-class, and you're calling a virtual method), I don't think there is any way to improve vtable based approach.

Say 90% of the objects are of certain same type and the code generator knows that. Codegen puts the test for this case as first one and inlines the code directly in the loop body. Then you can at least have those 5-10x savings in 90% of cases and have branch mispredicts for the rest of the time. That can still be a significant saving.
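A hand-rolled sketch of that idea in Rust (toy `Common`/`Rare` types; the 90% split is the scenario above): test for the dominant type first via a downcast and take an inlinable fast path, falling back to the virtual call otherwise.

```rust
use std::any::Any;

trait Node {
    fn cost(&self) -> u32;
    fn as_any(&self) -> &dyn Any;
}

struct Common;
struct Rare;

impl Node for Common {
    fn cost(&self) -> u32 { 1 }
    fn as_any(&self) -> &dyn Any { self }
}
impl Node for Rare {
    fn cost(&self) -> u32 { 100 }
    fn as_any(&self) -> &dyn Any { self }
}

// Guarded devirtualization by hand: check the common case first and
// inline its body; only uncommon nodes pay for the vtable call.
fn total(nodes: &[Box<dyn Node>]) -> u32 {
    let mut sum = 0;
    for n in nodes {
        if n.as_any().downcast_ref::<Common>().is_some() {
            sum += 1; // inlined body of Common::cost
        } else {
            sum += n.cost(); // slow path: virtual call
        }
    }
    sum
}

fn main() {
    let nodes: Vec<Box<dyn Node>> =
        vec![Box::new(Common), Box::new(Common), Box::new(Rare)];
    println!("{}", total(&nodes)); // 102
}
```

A JIT would generate the guarded fast path from profiling data instead of hard-coding it, but the shape of the emitted code is the same.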

Not really. At best, dynamic dispatch might be faster with inheritance, but even that's kind of a stretch. Most of the slowness with using trait objects in Rust has little to do with the dispatch itself, but with casting between the types being relatively inefficient and/or unsafe (for a whole host of reasons which also affect other parts of the language).

In any case, dynamic dispatch, polymorphic or not, is considerably slower than static polymorphic dispatch, which Rust excels at. Servo's biggest performance wins over Gecko (outside of pervasive parallelism) have come by greatly reducing dynamic dispatch.
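The contrast in Rust terms, with a toy `Greet` trait: a generic function is monomorphized into a direct, inlinable call per concrete type, while the `dyn` version makes every call through the vtable at runtime.

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct En;
impl Greet for En {
    fn greet(&self) -> String { "hello".to_string() }
}

// Static dispatch: compiled once per concrete T; the call target is
// known at compile time and eligible for inlining.
fn greet_static<T: Greet>(g: &T) -> String {
    g.greet()
}

// Dynamic dispatch: one compiled body; every call goes through the
// vtable carried by the fat pointer.
fn greet_dyn(g: &dyn Greet) -> String {
    g.greet()
}

fn main() {
    let e = En;
    assert_eq!(greet_static(&e), greet_dyn(&e));
    println!("{}", greet_static(&e));
}
```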

But, how often is it that dynamic dispatch can be optimized into static dispatch? I imagine it probably cannot be done with most of DOM handling... Isn't Rust's static polymorphic dispatch more-or-less equivalent to what you can do in C++ with templates?

For a long time there were Servo devs pushing for Rust to feature struct inheritance in order to safely and efficiently implement the DOM. Various proposals were written and prototypes implemented, but they all received criticism from the community (often of the form "this seems like a feature that solves a problem that only Servo will ever have") and no single proposal was a clear winner over the others. Today the Servo devs hack around the issue via `unsafe` blocks and transmuting things willy-nilly, but they're still dissatisfied with it.

I expect the issue to be revisited post-1.0. In the meantime, if you'd like to familiarize yourself with some of the proposals, here's a recent-ish summary and discussion: http://discuss.rust-lang.org/t/summary-of-efficient-inherita...

Right... I'm partially familiar with the development, but AFAIK, the basic goals are basically thin pointers and cheap dynamic dispatch using v-tables, not elimination of dynamic dispatch in favor of static dispatch.

My goal was to emphasize that you are correct that not all useful instances of dynamic dispatch can be reduced to static dispatch. With regard to the DOM specifically, however, there is some concern that the structure itself was designed and standardized such that it can only be implemented efficiently by assuming some sort of inheritance scheme, effectively baking the typical OO approach into the web standard. Opponents of inheritance being added to Rust have claimed that this is a unique case, and that such a feature would not pull its weight in any non-DOM scenarios (or worse, that it would be redundant with Rust's trait system and that you would fall into C++'s trap of nobody agreeing on which subsets of the language to use). Given the nebulousness of the proposals in this space, I have no dog in this fight at the moment.

My other goal was to show that Rust gives you enough tools to manually emulate an efficient inheritance-style dynamic dispatch scheme, even if it can't fully prove that your implementation is safe.

What does 'faster with OO' mean? Is there some implementation detail that is tied to, or heavily associated with, OO?

Method dispatch using a vtable (a pointer, embedded in the object, pointing to a record of methods) is way faster than practically any other form of dynamic dispatch (except calls of known functions, of course). To support it, you pretty much need some form of single inheritance/subtyping (C++ does support multiple inheritance, but it significantly complicates the implementation). Rust's single-dispatch type-classes and Go's interfaces are dispatched in a similar way, but the record of functions is passed along the pointer to object as a "fat pointer". Servo people figured out that that is too wasteful/slow compared to C++-style vtable approach.
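For illustration, the fat-pointer layout described above is directly observable in Rust: a `&dyn Trait` is two words (data pointer plus vtable pointer), while a plain reference is one.

```rust
use std::mem::size_of;

trait Speak {
    fn speak(&self) -> &'static str;
}

struct Dog;
impl Speak for Dog {
    fn speak(&self) -> &'static str { "woof" }
}

fn main() {
    // Thin pointer: just the address of the Dog.
    let thin = size_of::<&Dog>();
    // Fat pointer: object address plus vtable address, passed alongside
    // the data rather than embedded in the object C++-style.
    let fat = size_of::<&dyn Speak>();
    assert_eq!(fat, 2 * thin);
    println!("thin = {thin} bytes, fat = {fat} bytes");
}
```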

I assume it's based on (this is a bit hand wavy) the fact that in an OO language, calling the method merely needs to resolve the address space of the method. With pattern matching, you first have to perform the pattern match to figure out what instruction to execute, and then that resolves to an address.

That is, because your data and function are bundled together, having the data means you also know what function to call. Decoupling them, as in a functional paradigm, and passing the data into a pattern match, you have to perform some logic upon the data to figure out what function to apply to it.

But, a series of pattern matching expressions can compile down to a bunch of sealed/final classes that descend from a common base class. In that case, it would still end up using a vtable[1]. I think this is how it works in F# and Scala.

[1] I think that the JITs for the CLR and JVM can do better than vtables in the cases of sealed/final classes.

I am sure it doesn't work that way in either Scala or F#. You can use classes to represent structure, but the matching is still extrinsic and can't use a table, given the flexibility of both languages (you don't just match on class).

Just curious… why not? Isn't a class definition just a type constructor? For instance, if I have a class, Foo, with three types of fields: int, bool, and string, isn't that the same as:

    Foo of int * bool * string
I don't have a full PC handy, but, while learning F# a few years ago, I remember using Reflector to analyze an assembly generated by FSC.exe and the pattern match turned into a class hierarchy.

It may not work for all kinds of pattern matching, but that seems like the most straightforward way of handling sum types. What am I missing?

I doubt that was what was going on, but I can't find any documentation right now on how F# optimizes case matches. If it was doing that, it would have to apply those classes to all instances of the patterns, which seems unlikely in the face of separate compilation as "Foo of int * bool * string" could be constructed in a far away place. Well, if Foo was a local type that wouldn't escape, perhaps they could optimize it in the module, but it still seems unlikely.

OO languages have trouble optimizing constructs such as interfaces (and other forms of multiple inheritance) because the v-table is no longer very linear. Now throw in arbitrary predicates and you are pretty much back to linear search. Probably the state of the art can be found in GHC, but even they probably fall back to linear in a lot of cases.

Looks like you're correct; I just examined some F# code in LINQPad 4, and the IL has a bunch of `isinst` instructions to check the type at runtime, and no `callvirts` to the one pattern matching expression that I created.


> Polymorphic dispatch (is that the term?) to replace if blocks.

It looks like you're talking about Predicate Dispatch. http://c2.com/cgi-bin/wiki?PredicateDispatching

Polymorphic dispatch is the specific reason for polymorphism in Object-Oriented Programming. It is more generically known as "dynamic dispatch": http://en.wikipedia.org/wiki/Dynamic_dispatch

You'll encounter the term in Bertrand Meyer's Object-Oriented Software Construction for example: http://www.goodreads.com/book/show/946106.Object_Oriented_So...

Dynamic dispatch is a special case of predicate dispatch. In particular, captainmuon pointed out that:

>Pattern matching in functional languages solves this in a different way.

captainmuon wanted a more flexible way of dispatching on types similar to what can be done with Pattern matching (which is also a special case of predicate dispatch).

In the post I linked to above this example is given:

Basically, instead of basing the dispatch on an "is_a" check, you check whether a general predicate is valid on the argument. So, in imaginary syntax instead of writing

  int foo(int a, int b):
      if a > 0
          return a
      return b - a
you'd write:

  int foo(gt_zero? a, int b): return a
  int foo(int a, int b): return b - a
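Rust has no predicate dispatch, but a match guard expresses the same split at a single entry point (reading the second clause's fallback as `b - a`, matching the first version):

```rust
// The two predicate-dispatched clauses collapse into one function whose
// guard plays the role of the `gt_zero?` predicate.
fn foo(a: i32, b: i32) -> i32 {
    match a {
        a if a > 0 => a,
        _ => b - a,
    }
}

fn main() {
    println!("{}", foo(3, 10));  // 3
    println!("{}", foo(-2, 10)); // 12
}
```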

I think you'd be quite happy in Rust, if you substitute trait methods for multimethods too.

Regarding your second point, what would be the functional equivalent?

He means pattern matching functions, which are idiomatic in Erlang but "dirty" in Haskell (where you would want to solve it better with a type class, or something else more general).


    myfun({duck, "My Duck's Name"}) -> duck;
    myfun({dog,  "My Dog's Name"})  -> dog;
    myfun({cat,  "My Cat's Name"})  -> cat.
Erlang also has arity matching (this function is different than the one above it):

    myfun({cat,  "My Cat's Name"}, cage) -> caged.
Case statements are one way of doing that but those are considered a messy style in Erlang because they end up nesting so deeply - keeping it all out in small little functions and using pattern and arity matching is the "right" way.

You can do the same (except for arity matching) in Haskell with function arg pattern matching and Haskell also has some very nice projection tools for it too:


    newtype Name = Name { unwrapName :: String }
    data Animal = Cat | Dog | Duck deriving (Eq, Show, Ord)

    myfun :: Maybe Animal -> Name -> String
    myfun (Just a) (unwrapName -> name) = (show a) ++ "'s name is: " ++ name
    myfun Nothing _                     = "No animal given!"

    > let n = Name "Fido"
    > myfun (Just Cat) n
    > "Cat's name is: Fido"
    > myfun Nothing n
    > "No animal given!"
There are many problems with that function and I could probably eliminate the multiple function clauses and reduce it to one clause by getting rid of the pattern matching and using the maybe function:

    myfun :: Maybe Animal -> Name -> String
    myfun a (unwrapName -> n) = maybe noAnimal formatAnimal a
      where
        noAnimal = "No animal given!"
        formatAnimal t = printf "%s's name is: %s" (show t) n
That would be the more idiomatic way I would do it - pattern matching isn't "bad" in Haskell and you'll see it used quite often in utilities and libraries. It's considered good form to move stuff like that into a utility and give it a clear name (like the maybe function above that takes our formatAnimal function) then use that in your application code so it becomes obvious what's going on.

Pattern matching, deeply nested ifs and cases, etc... are somewhat hard to parse for the eye.

What is the concept here:

    newtype Name = Name { unwrapName :: String }
    (unwrapName -> name)
I'm not familiar with this, are you pattern matching on a function type?

It's the View Patterns extension: https://ghc.haskell.org/trac/ghc/wiki/ViewPatterns

Just getting into Erlang right now, this is very helpful.

Infix notation in Haskell is close, except that most of the time if verb syntax is convenient you're expecting side-effects.

I was thinking of something like in F# (disclaimer: I've never used it and copied this from MSDN):

    type Shape =
    | Rectangle of height : float * width : float
    | Circle of radius : float

    let matchShape shape =
        match shape with
        | Rectangle(height = h) -> printfn "Rectangle with length %f" h
        | Circle(r) -> printfn "Circle with radius %f" r
Admittedly, this is not too far from a simple `if`, but I believe the language ensures that you handle all cases.

The difference from OOP is that the code is in one place, with the function, instead of distributed across all the objects. That makes it easier to add functions, but harder to add new types. Often this is the better option, but if you are writing truly extensible code (a library that gets passed custom objects, or an application with add-ins) then you want to go the OOP way.
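Both halves of that tradeoff, side by side in Rust (toy types): the enum makes new functions cheap, the trait makes new types cheap. This is exactly Wadler's expression problem from upthread.

```rust
// Functional style: data in one closed enum. Adding a `perimeter`
// function is one new match; adding `Triangle` means editing every match.
enum Shape {
    Square(f64),
    Circle(f64),
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Square(side) => side * side,
        Shape::Circle(r) => std::f64::consts::PI * r * r,
    }
}

// OO style: behavior lives with each type. An add-in can define `Hexagon`
// without touching this code; a new operation means editing every impl.
trait HasArea {
    fn area(&self) -> f64;
}

struct Square(f64);
impl HasArea for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}

fn main() {
    assert_eq!(area(&Shape::Square(3.0)), Square(3.0).area());
    println!("{}", area(&Shape::Square(3.0))); // 9
}
```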

You would pass in a function or datatype containing a function, which is exactly what the OO code does. No language features beyond first class functions necessary.
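For example (illustrative sketch): the "custom object" a library receives can be nothing more than a function value.

```rust
// A library entry point that is extensible without any class hierarchy:
// callers hand in behavior as a plain closure.
fn process(items: &[i32], handler: impl Fn(i32) -> i32) -> Vec<i32> {
    items.iter().map(|&x| handler(x)).collect()
}

fn main() {
    // Two "plugins", no interfaces or subclasses required.
    let doubled = process(&[1, 2, 3], |x| x * 2);
    let squared = process(&[1, 2, 3], |x| x * x);
    println!("{:?} {:?}", doubled, squared); // [2, 4, 6] [1, 4, 9]
}
```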

I'm a Java programmer, so I work with OOP every day. I think it has its pros and cons like everything else.

My pet hate is how dogmatic some people get to the point where they get angry over the use of if/switch statements. "It's not good OOP, use polymorphism", well the goal isn't to pass an OOP exam, but rather to write clean maintainable code where possible. If an if/switch saves me writing 6 classes with 3 lines in each of them, then that's what I'll do.

My pet peeve is how these brilliant architects end up writing so much code that does nothing. These 1 line functions just end up calling another 1 line function. At some point someone actually has to write the code. You know, that thing that actually does the work.

I always figured OOP programmers actually hate coding because they try so hard to avoid actually writing code that does work.

I have seen architecture skyscrapers in Assembly, Clipper, C, C++, you name it.

It is something that is part of enterprise culture, it has nothing to do with programming languages.

I would consider myself lucky that someone with the word 'architect' in their job title on my team wrote code that did nothing. Careful what you wish for.

There are Marchitects, Tarchitects, and Farchitects (Market-oriented, Technology-oriented, and umm... "freaking" architects).

Sounds like you have a team of Farchitects.

I have the title "architect", and I write code. It's usually code that lets the other thirty programmers write one or two lines of code instead of 200 lines, 30 times, in 17 different ways.

Because it is framework code, I'm often providing them the Lego blocks to build the rest of their system in... With unit tests... and examples... and at least one real implementation... and a Wiki page explaining it... and a Powerpoint session teaching them how I expect them to use it.

I do get the occasional grumble ("Uhh... Can't I just use Doohickey V directly?" "Sure, take a look at the interface, and don't forget the externalized configuration." "Oh.")

Just sayin'. There are architects and there are Architects.

Yeah, it's often taken too far. I was on a project where they created a DTO, DAO, UI, and some other bullshit class for each screen/idea. Most had 10 lines or less and there were at least a few hundred projects. I can't even remember.

You'll still have the DAO and the DTOs in a functional language - they're types which represent concepts outside of the bounds of your system (well, process).

Consider a simple information system consisting of a database and a web service publishing a REST/JSON API. The web service will need a definition of the interface between itself and the database (the DAO) and a definition of the interface between itself and the outside world (the DTO).

Defining both interfaces explicitly allows you to control change and accessibility of data. I.e. you don't have to expose your database to consumers of the web service.

I'd love it if my OO language of choice (C#) could add a special non-polymorphic type to represent these concepts, because then you could do cool stuff like define the DTO in terms of the DAO. "The DTO is all properties of the DAO except propertyX and propertyY".

Ironically, they're guilty of an anti-pattern. The point of all these patterns is to only use them when they actually solve a code smell. If you introduce them early, then you have Needless Complexity, as they like to say.

Java is arguably not an OOP language; it's a class-oriented language. It only became OO with JDK 1.8.

Guest post from the future - here's a quote from a similar page on cat-v.org in 2024:

"FO is the “structured programming” snake oil of the 10s. Useful at times, but hardly the “end all” programming paradigm some like to make out of it.

And, at least in its most popular forms, it can be extremely harmful and dramatically increase complexity."


It's much easier to be a web contrarian than to build widely-used solutions that don't have limitations, drawbacks, inconsistencies, or other issues.

It is a list of quotes from people who have in fact built such "widely-used solutions". How is that being a "web contrarian"? Do you really think Rob Pike has never written any code? How did Go happen, then? You don't think Carmack has maybe a little bit of experience with large, complex software projects?

What is FO?

Function Oriented (~ OO: Object Oriented)

Procedural? That's been around for a really long time, and isn't a fad or being pushed by marketers like OO was.

I was briefly part of the "OOP is overrated, functional programming is in" camp. OOP is enterprise, functional programming is "simple" because it's just functions and data, you know, all the usual party lines.

Well, turns out CLOS is so good it's hard to stay away from it. You don't struggle to force something into an inappropriate object-oriented paradigm, rather, object-oriented solutions just flow naturally from the problem. And it's a pleasure to use.

I'm reasonably convinced that inheritance is a bad default/privileged operator for a language. Composition is a better default.

I often find myself suspecting that the hate OO gets is primarily a result of this early error.

The remainder is of course the way OO was promised to be the end-all of programming paradigms, which has not happened. Backlash is fully justified for that.

To the best of my knowledge, the cat-v/suckless crowd support neither of these paradigms.

Closure? Common Lisp Object System? CLOS network topology?

I really find Carmack's and Armstrong's quotes very appropriate. The root problem seems to be trying to do everything the OO way when all you need is a function. Java, for example, is plagued by this problem: why would I want to write a class/object when all I need is to print "Hello World"? That being said, there are problems where it fits really nicely, e.g. implementing user roles.

> Why would I want to write a class/object when all i need is print "Hello World".

There is a huge gap between what OO could/was meant to be and how it is seen nowadays. I prefer thinking of OO as it's implemented in Smalltalk or Io instead of in C++ or Java.

To be more specific, in Smalltalk your hello world would look like this:

    Transcript show: 'Hello world'
and in Io:

    "Hello world" println
No class/object declaration in sight, right?

And then this:

> The root problem seems to be trying to do eveything the OO way when all you need is a function

is a wrong question altogether - there is nothing stopping you from writing a function in OO (other than broken and dumbed down implementations in major languages, that is). In Smalltalk:

    my_func := [ 'Look, I'm a function!' ].
    "There's this slightly unusual way of calling the function though:"
    my_func value. "returns 'Look, I'm a function!'"
similarly in Io:

    my_func := block( "Look, I'm a function!" )
    my_func call # same as above
So, to make my point clear: there is NOTHING in OOP itself which REQUIRES verbosity and over-abstraction. On the contrary: going "full OO" makes it easier to write short, readable, to-the-point code. It also makes it easy to use FP patterns should you want it.

What you're arguing against are the currently popular implementations of OO, which are just bad. And before I forget: Erlang (and I program in it quite a bit) is one of the best Object Oriented languages I worked with.

I'm not sure if you are trolling or not, but Erlang is not an object-oriented language. Joe Armstrong said it quite clearly in the article his quote was taken from:

> As Erlang became popular we were often asked “Is Erlang OO” - well, of course the true answer was “No of course not” - but we didn't want to say this out loud - so we invented a series of ingenious ways of answering the question that were designed to give the impression that Erlang was (sort of) OO (If you waved your hands a lot) but not really (If you listened to what we actually said, and read the small print carefully).


I'm not trolling; of course whether Erlang is OOP depends on a definition of OOP you consider "real". If you think of OOP as Java-style programming, then of course, Erlang has very little in common with it.

On the other hand, if you go back to what Alan Kay had in mind when he invented OO, namely objects as "black boxes, similar to computers in miniature" and "message passing as the only way of doing something" then Erlang is very, very much OO. Its processes are objects, and sending (asynchronous!) messages is built into the language. Then you get encapsulation, data hiding and interfaces with module exports and behaviours. That doesn't stop Erlang from being FP, too.

Really, there are many different paradigms and many implementations of each one, it benefits no one to only consider one particular implementation as representative for a whole paradigm.

Footnote: Alan Kay post on a similar topic: http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-...
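That "processes as objects, messages as the only interface" model can be sketched in Rust with a thread owning private state behind a channel (a rough analogue, not real Erlang semantics; `spawn_counter` and `Msg` are made-up names):

```rust
use std::sync::mpsc;
use std::thread;

// The messages this "object" understands; its counter state is reachable
// only by sending these, never by direct access.
enum Msg {
    Add(i64),
    Get(mpsc::Sender<i64>),
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut count = 0; // encapsulated state, like a process dictionary
        for msg in rx {
            match msg {
                Msg::Add(n) => count += n,
                Msg::Get(reply) => { let _ = reply.send(count); }
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(3)).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    println!("{}", reply_rx.recv().unwrap()); // 5
}
```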

To paraphrase Robert Downey Jr., 'Never go "full OOP."'

I think the bigger problem has been decades of education that start with decomposing domain objects into OOP class hierarchies by common attributes.

Animal, Dog, English Bulldog, Herman.

Now we have significantly better ways of teaching OOP:

- favor composition over inheritance

- decompose and group behaviors

- Liskov substitution principle

Sadly we're dealing with mainstream languages that are barely OO like Java (which I've done 99% of my work in for the last thirteen years). But even if Eiffel had somehow taken off and stood in Java's place... Nope... can't finish that sentence. It couldn't have.

OO was originally created as a way to cleanly allocate memory on the heap. That's a fairly well solved problem now. The best parts of OO aside from heap allocation are encapsulation and polymorphic dispatch (two sides of the same coin). Closures and functions as fundamental units of decomposition handle that fairly well, especially when you couple that with type inferencing.

I'm not arguing that OO is bad. I'm arguing that it's no longer anywhere near as useful as it used to be, even when it was implemented and used properly.

> OO was originally created as a way to cleanly allocate memory on the heap.

That's the first time I hear something like this. Any source for this?

I can't find the reference now, but it was part of a history I'd read about the creation of Simula 67.

Given that the summaries you read on Wikipedia about it describe Simula 67 as a fulfillment of the need to create a better process description abstraction, I may simply be wrong.

Quick side question on this piece of Smalltalk:

    my_func := [ 'Look, I'm a function!' ].
How does the string parsing work here? I like it a lot

edit: welp, this [1] makes it sound like 'Look, I'm a function!' is a typo and should in fact be 'Look, I''m a function!' (note the doubled single quote in `I''m`)

But the idea of not needing to escape single quotes when surrounded by something other than whitespace is interesting.

[1] https://gist.github.com/sin3141592/602700#file-smalltalk-gra...

OMG, yeah, my bad - I tested the code with some other string in Pharo and then changed it when pasting it here. I should have known better and tested the exact code I was going to post :)

Agreed, but note that C++ doesn't require you to ever declare a class or an object. That's a Java thing you're complaining about.

C++ is equally "bad" in this regard - it doesn't force you to write classes, but when you don't you're using underlying primitives which are not first class objects. In both Smalltalk and Io the expressions I gave as examples produced real objects (BlockClosure and Block instances, respectively), while in C++ this:

    void my_func(){ std::cout << "I'm a function"; }
produces a simple pointer.

Of course, this is how it should be - it follows from C++ design goals and totally makes sense for a number of reasons; however, from the perspective of OOP, this makes C++ less "pure OO" than mentioned languages (and many others).

Agreed. Almost always you have the option to place some functionality somewhere on the hierarchy, but actually composition would be better.

To make matters more difficult, you have to make these kinds of decisions very early, before you write your code. By the time you properly understand the problem, you will have invested in a particular solution. There are N>3 ways to do things and only one best way, so chances are you are doing it the wrong way. OOP is not bad, it's just often misused.

How often are you writing a program that only needs to print "Hello World"?

Every time I need to put a program in discussion board comment to prove a vague point about program design.

Complaints against OOP often arise from the fact that the programmer starts with an idea of the instance method "do()" and tries to build a class hierarchy under this idea. When the programmer fails, obviously the paradigm is wrong because the idea can't be, right?

As for functional vs OOP, I am of the opinion that just because the Haskell type system is awesome and the Java type system is traditionally inflexible, it doesn't mean functional > OOP. Language features like objects, static checking, and first-class functions are typically very orthogonal, hence you should be combining the features so they match your domain.

I figure that as code evolves, it generally takes 3-4 rewrites to get class hierarchies to a point where they match your domain, and more often than not the solution involves shallow trees and the use of traits or mixins, which is something traditional OOP languages don't all support well.

The only harmful paradigm is "write only programming", everything else is negotiable.

Functional vs OOP isn't really about the type system - Clojure is a functional language and it's dynamically typed.

The main value of functional programming is working with values and isolating mutation. When you deal with just immutable values your code maps to distributed systems naturally which is why functional programming is becoming more popular with the advent of cloud and distributed computing.

> When you deal with just immutable values your code maps to distributed systems naturally which is why functional programming is becoming more popular with the advent of cloud and distributed computing.

I work with hardcore distributed systems people (the ones that attend SOSP and OSDI). And the penetration of functional programming in distributed systems is about 0%. Sure, immutable state is easy to not share, but sharing of mutable state is inevitable, and you gotta deal with it, not try to wish it away.

The only people who seem to think functional programming is great for distributed systems seem to be people who don't really do distributed systems, or at least not ones where scalability, performance, and fault tolerance are critical.

Shared mutable state tends to be only in a core data store or scheduler/thread manager, a small part of a distributed system.

Ya...no. Maybe toy distributed systems, but not the larger ones industry works with. Let's put it this way: if you have no shared distributed mutable state to worry about, it's not really a distributed system, it's probably doing parallel computation or something similar (like via map reduce).

And it supports CLOS like OOP!

>I am of the opinion that just because the Haskell type system is awesome and the Java type system is traditionally inflexible, it doesn't mean functional > OOP

No, it certainly does not mean that. And there is a tendency to misattribute the benefits of FP to the type system and vice versa. So remove the type system from the equation. The fact that erlang is better than all the generic untyped OO languages suggests that yes, FP is better.

>I figure as code evolves, it generally takes 3-4 rewrites to get class hierarchies to a point where it matches your domain

That's not a good sales pitch for OOP.

> The fact that erlang is better than all the generic untyped OO languages suggests that yes, FP is better

What criteria are you using to determine this "fact"?

"Inherently harmful" is a bit vitriolic. If not a majority then a huge percentage of the world's systems are based around OOP and for the most part they function just fine. When people say these things I wonder what bubble they are living in where something is either the best thing ever or shit. The world doesn't work like that.

That said, I prefer a hybrid approach which is why I use Scala. It's not an either/or (or, if you prefer, an Option monad :-)

Sure, but sometimes strong opinions are necessary to make progress. If Dijkstra hadn't written "Go To Statement Considered Harmful", would we still be using them today? Probably not, but it probably helped bring quicker progress.

Just because you can build a house with a rock doesn't make it the best tool for the job, and it's in the interest of the carpenters to figure out what is the best tool for the job.

I often dislike C++ and Java (note: I did not say object-oriented programming), because the logic tends to be distributed across so many different files and locations. The abstractions, which should be beneficial and decrease mental load, have the opposite effect over a longer time. To understand what just one overloaded method call does, I often have to read through a dozen different files. Context-sensitive IDEs don't make it much easier. Simply grepping files at the command line is often the fastest way!

Combine that with object mutable state. And that with multi-threading (surprise state changes). Add exceptions on top (surprise hidden gotos, especially annoying in C++). The end result is often nearly impossible to fully understand.

However, this is not anti-OOP rant really. I recognize OOP has its uses.

The bigger issue is almost always when there's a discussion of whether X is better than Y: some people seem to forget that often both X and Y have their place in the toolbox. They're often complementary. When you like technology X, technology Y is not a threat to you, but an opportunity to learn something new. In the same way, criticism against X can be an opportunity to learn and improve, not hostility against people who like or are used to technology X. No matter how much you like a hammer, that doesn't make a saw a bad tool.

>To understand what just one overloaded method call does, I often have to read through a dozen different files. Context-sensitive IDEs don't make it much easier. Simply grepping files at the command line is often the fastest way!

Right click -> Find all references / go to definition don't do that faster than typing out the grep command?

You said mutable state makes it harder to understand, and you said exceptions make it harder to understand because they're hidden gotos (which if statements and while loops are too), but didn't explain either. In my experience, exceptions have been a very simple, helpful, intuitive component of control flow.

> Right click -> Find all references doesn't do that faster than typing out the grep command?

Yes, an IDE is not generally faster. Find all references is not faster for me in most real life scenarios. Typing the grep command is just arrow up, ctrl-something to get the cursor to the right position, type something, hit enter. Piped to less, I hit / to search within the results. Done right, results from grep appear pretty much instantly, no waiting. Some operating systems are much better for this technique than others, due to the cost of dealing with small files.

> You said mutable state makes it harder to understand, and you said exceptions make it harder to understand because they're hidden gotos (which if statements and while loops are too), but didn't explain either. In my experience, exceptions have been a very simple, helpful, intuitive component of control flow.

Whenever you call some method (or overloaded operator), it can throw. The fact is completely hidden from the context in front of you in C++.

In Java, you do know what might throw, because you have to declare it. On the other hand, the cost is high when anything you call changes the exceptions it throws. Or more like, it's set in stone.

That still sounds like more work than the way I do the same process in Visual Studio, which uses the AST or something around that level to search the code instead of just a text search, so it doesn't find irrelevant results. I guess it depends on your environment.

It is better when it's specified exactly what exceptions can be thrown by a method, yes. One of the only things I prefer in Java over C#.

If only Visual Studio/C++ used AST! At least in Visual Studio 2012/C++, context sensitive search doesn't often work at all. Try to get context for something common like ".create". You get a lot of non-context hits, even thousands. Maybe I'm doing something wrong? Maybe this doesn't happen in VS2013? In most other IDEs context sensitivity is better, but I still fall back to command line in Linux. It's just so much faster (for me). I get what I want in about 0.1 to 3 seconds and can easily search near (grep -A and -B switches) these context hits with '/' in "less".

Git grep is great too.

Regardless, if it works for you, great. We should all use what works and disregard needless debate and politics.

I haven't used it for C++, only C# and VB.NET. It's integrated more with those, probably not with C++

One point I often see missed is that a codebase that supports grep-style searching is usually one that does not constantly reuse the same function names. In a codebase really done this way, the same variable name appearing in multiple places will be informative as well, even if the types are different.

In my experience, VS2013 with peek-at-definition really is unmatched. I'm all for text searches (it's my go-to, rather than point-and-click files), but the contextual definition (and ability to edit) is stellar.

I used to grep everything, but now in emacs I have a "go to definition" for every language I could find it for bound to a hotkey.

In C++ at least there's not much stopping you from putting every single class and method in a single header file like an animal.

OOP and FP are not in opposition to each other. FP considers objects to be at a different granularity than your typical OO language does. So the real problem is that the fixed granularity of the objects (objectionable, heh) in typical OO languages leads to mutable state, which leads to unnecessary complexity.

In the end, typical OO leads to type systems, which lead to noun discrimination, which leads to ontologies, which reflect world-views. You don't want to get any of that on you. The more successful type systems stop at grouping aggregatable functions. Look at Go. Or Clojure. Those type systems are as good as it gets to me, but I've never written a line of Smalltalk or Haskell, so someone set me straight on that.

> OOP and FP are not in opposition to each other. FP considers objects to be at different granularity than your typical OO language does.

Well, FP purists would argue that objects (a unit that has properties and methods, and can pass messages to collaborators) have state. AFAIK "pure FP" is about getting rid of state, using immutable data structures, and describing operations on stateless units rather than writing imperative code.

I personally believe that being an FP purist or an OOP purist doesn't make sense. Both paradigms can work together, and thus be orthogonal and not in opposition.

> AFAIK "pure FP" is about getting rid of state

Of course not. It's about making state always explicit and about carefully controlling side effects. Some state is always going to be there.

I agree in principle, but your definition of an object is very limited, which is part of the problem. FP considers both a function and a closure to be objects. Classic OO does not. This is part of what I mean about different levels of granularity.

A couple years ago at least, I know I had problems with writing type safe json handling code in Go. I believe my particular hang up was a place where I basically needed an enum.

In the quixotic hope of heading off a pointless argument: the opposite of OOP is not functional programming.

I guess the opposite of OOP would be C without structs. Which is a thoroughly horrible thought.

So... FORTRAN? :)

"... a computer model should be structured along three dimensions: data, functionality and events":


Y'know, I spent half a day today thinking about it, and your post gave me a nice, fresh angle at the problem. Thanks for that. So here's what I came up with (still work-in-progress):

The simplest program is easily done in an assembly- or BASIC-style program - a straight sequence of control instructions. The program starts with a set of data, constructs an intermediate data structure, a state graph, iterates on that graph in a simple sequence of commands, mutating it, collects output from the graph, and then terminates.

When a program gets more complicated, we break sequence of commands into subsequences (subroutines/functions/etc). Each subroutine can be understood in isolation from others, which is a great win, but it still interacts directly with various pieces of the state graph, which limits our ability to reason about its effects.

Next we carve out individual pieces of the application state graph, and proclaim that each such piece can only be directly accessed by a small amount of code married to that piece of data. Such code and data married together are called an "object", and the entire approach is "object-oriented". Thus to reason about a subroutine we only need to look at its code in terms of its interaction with the "object" abstractions, rather than with the raw state graph data. Since abstractions are simpler than their implementations, it gives us extra ability to understand and thus create larger programs.

Implicit in the last two approaches is mutability - as the graph state changes, the same pieces of code continue to have access to it, and thus are expected to accommodate the drifting state. That is, the same chunks of code are expected to operate with a number of different states of the graph, corresponding to different stages in the lifecycle of the data.

An alternative way to cut down on complexity is "pure functional". In this approach each piece of code operates directly on the entire data graph, but only at a single stage of the lifecycle - it would iterate over an input readonly graph, then produce a new output readonly graph as a result, to be fed into the next stage of the pipeline.

So we have two dimensions - data and time. If we narrow down the data dimension, we end up with the object-oriented approach. If we squish the time dimension to a single point, we end up with the pure-functional approach. The third dimension - the sequence of commands - is split up into smaller chunks in either of the two approaches.

Interesting thoughts. Now you set me off for at least half a day of thinking further. By the way, your sibling question already set me off studying reactive programming which I didn't know much about. It seems like reactive programming is lifting the time aspect of programming to a more prominent position. So functional-reactive coordinates functionality with timing. Will be interesting to see how the data dimension fits in then...

So... Functional reactive programming?

You are using "object oriented programming" by definition, the moment you formulate an "object" in code with interactions on that object (regardless of what language you're using)!

I completely agree with the arguments showing how harmful OOP abuse is (especially branching inheritance trees!), just like abusing pointers, or structs, or error codes, or abusing <insert_language_feature_here> is harmful. But what does that have to do with OOP inherently?

Let's look at historical crashes, glitches, and hopelessly messy code and try to trace their actual source. I bet you'll find sloppy programmers as the #1 root cause (maybe nullable pointers and/or weak typing as #2). OOP isn't bad because it's OOP; it's bad because hordes of bad programmers use it at big companies that don't care about code quality.

Beautiful functional code is beautiful.

Beautiful procedural code is beautiful.

Beautiful object oriented code is beautiful.


Spend more time writing beautiful code that draws its elegance from the rich set of paradigms and tools we have; spend less time participating in fad paradigm-ocide movements :)

Containers are a particular case: almost nobody reimplements a container, and you're talking about templates, not OOP.

> How would you do this without some kind of "object orientation"?

Variables. Variables are not "objects". What is an "object" anyway? It's a shallow, abstract concept which is impractical.

> The moment you formulate an "object" in code with interactions on that object

No that's not what the article is trying to talk about.

What people argue against in OOP is how it's applied to anything, for any case. Creating a new class/type/abstract concept each time you want to realize something can often lead to code bloat. How many classes named "XXX_manager" can one encounter?

> Sorry for shattering everyone's hopes and dreams of coding pure OOPless.

OOP tends to have state, which doesn't mix well with multithreading, which tends to favor functional programming. I don't know if the future is going to be massively multithreaded, but I think that OOP is just another way to structure a program, nothing more. Processes and threads already are objects. Functions too. So why tell students to create new types all the time?

I wonder if there are GUI widget systems not based on deep inheritance trees.

It seems to me that GUI and OO are deeply synergetic.

It's only slightly oversimplifying to say that OO was invented for a GUI, namely Ivan Sutherland's Sketchpad in the early '60s, and then developed with direct correspondence to physical objects in mind (Simula 67).

- http://en.wikipedia.org/wiki/Sketchpad

- https://www.youtube.com/watch?v=USyoT_Ha_bA

- http://en.wikipedia.org/wiki/Simula

- The Development of the Simula Languages in HOPL I if you can get it, else http://phobos.ramapo.edu/~ldant/datascope/simula%20history.p...

Sketchpad was prototype-based; Simula was early class-based (though they didn't use those terms yet). It's likely they were developed mostly independently; OO just arises naturally when design is based on linguistic concepts like subjects, nouns, and things they can do.

What we know of UI today is more an artifact of Smalltalk (both the first GUIs and Smalltalk came out of PARC and were related).

This is no accident. Smalltalk was developed at Xerox PARC and included an MVC interface toolkit. http://st-www.cs.illinois.edu/users/smarch/st-docs/mvc.html

I think the functional people took a whack at this question with functional reactive programming. Some links:




Maybe some of these would interest you? https://www.haskell.org/haskellwiki/Applications_and_librari...

React, sort of, if you ignore the underlying clusterfuck that is the DOM.


OO just helps me understand the code (assuming that it is well-written, which isn't always the case). I don't understand the hate.

I have always imagined that it was just a natural progression from procedural code:

1. Put a bunch of functions in a file.

2. Ok, now we have too many functions to keep track of, so let's put functions in separate files.

3. That worked for a while, but now too much repeated code, let's put some functions into modules.

4. Now, we have all of these variables to keep track of, so let's hide some of them to lessen any confusion.

5. Etc. Etc. Etc.

6. Wow, now we have a complex language with all sorts of patterns that perhaps confuses things more than intended.

That's why I like languages that are lean and have pragmatic features. OO, functions, whatever works to make my life easier.

One thing in that article stands out:

"OO is the “structured programming” snake oil of the 90s. Useful at times, but hardly the “end all” programming paradigm some like to make out of it."

In 2014, I think it's important to remember that "structured programming" was, basically, "hey let's use FOR and WHILE loops, and subroutines, instead of a big pile of GOTOs", and think about what that says about any author who describes it as "useful at times" and "snake oil."

Catchy and content-free soundbites that do nothing to further a discussion.

One of the big problems with discussions around OOP is that people have wildly different opinions on what OOP is. Whatever it turns out to be, it'll be a tool like any other that can be used efficiently or poorly.

I've read it before. It's a fluff piece for Erlang. You can tell that it's not a serious argument since he doesn't acknowledge why things are done a certain way in OOP and what the advantages are. He obviously thinks the cons outweigh the pros, but Armstrong doesn't have to feign ignorance on the matter.

I think inheritance is a really good idea, and that most problems with it result from people forcing inheritance onto problems not benefiting from it.

Can we all just make a resolution not to use any technique (no matter how fashionable, not even FP) on a problem unless it is a natural fit?

In this CppCon keynote, Mike Acton explains some of the problems with OOP and the benefits of data driven design. Really worth a watch: https://www.youtube.com/watch?v=rX0ItVEVjHc

I am a big proponent of functional programming, but there are situations where it would be veeerrryy convenient to have open types and inheritance, which is forbidden in a language like Haskell.

Modelling something like a GUI toolkit is very difficult in Haskell, because each widget has a different clump of data associated with it. Functional programming works well when the data is uniform, or at least known ahead of time. It's very difficult to add new data structures into the system in a dependent module, like if you want to add a custom widget type. In OO, this is trivial.

I'm not sure what the best solution is. People are researching alternative solutions (namely FRP), but it's too bad it's necessary in the first place.

> like if you want to add a custom widget type

Add a custom widget from a gui program itself? So it's a gui program that has a widget builder then? Why not have an abstract data type which lets you encode the properties of both of those?

I'm talking about supplementing a toolkit. Like, in iOS or GTK or Qt or whatever, there are stock widgets (buttons, sliders, text boxes), but you can also add your own widgets, like a date picker. A date picker would be modeled completely differently from those other widgets, and so it needs a completely different data structure. This requires supplementing the existing types, which isn't possible in Haskell.

That date picker inherits from something though right?

Right. In OO, you can inherit, but in Haskell, you can't.

Also, I'm not sure why you'd need to inherit to provide this functionality. The user could just provide a value of the monadic type that the function running the GUI expects, and extend it however they wish.

I won't claim to be an expert on this stuff-- I've only recently been studying this. But, from my understanding, if you have a tree of GUI elements, each of those elements needs to have the same type, and that type needs to be known ahead of time. It's possible I don't understand your comment. Would you mind pointing me to some sample code or blog post or something?

I've been working on leaving OO recently. This is my current API. No deep nested hierarchies of classes anymore!


People who generally code applications could learn a lot from game developers and game engines. I took inspiration from Stencyl and Haxe (but using Swift and SpriteKit): http://www.stencyl.com

That's an excellent use of Composition. Most people think inheritance hierarchies are necessary for OO, but composition/aggregation is a much better way to do things.

A lot of the complaints I've read about OO center around people doing stuff that they later determine they didn't need to do, and then being stuck with it because it's in the middle of an inheritance hierarchy. Modern OO design would center around decomposing each class in the inheritance hierarchy such that the decomposition produces small subsets of shared behavior, and then composing those together to create the actual object. As a bonus you can still produce run-time polymorphism without crawling an inheritance hierarchy.

Composition in an OO system is very much like composition in mathematics. Two objects are combined to produce the desired behavior.

A lot of people crap on the GOF book because the patterns in it are either obvious or useless, but reading it does teach you some useful concepts, and for many people it will be their first direct exposure to something other than inheritance. Composition/Aggregation are great takeaways from that text.

As an example of a crappy inheritance hierarchy, a developer 20 years my senior had this four-layer inheritance hierarchy to represent 3 different data types. After he gave it to me I spent 15% of my time convincing him to let me eliminate half the classes he wrote. The worst part was that he was inheriting and then in the subclass he was writing functions that were semantically identical to superclass or cousin-class functions, but with different names. After I eliminated all that I was able to use templates to hide a lot of the mess.

OO design should be done according to the YAGNI principle--You Aren't Going To Need It--and its corollary--do it as soon as you need it.

Hey, thanks!

Yeah, that's the impression I get: the ability to add functionality to an object in small modules (Components/Behaviours).

However, this only works if there is an inherent structure and the ability to use components. Most frameworks and libraries I have worked with don't really have that structure.

For instance, SpriteKit (Apple's 2D framework) lacks a structure for building components. So most SpriteKit examples tend to have deep hierarchies of class inheritance.

If you'd want to take a look I wrote this https://github.com/seivan/SpriteKitComposition

Documentation is still lacking as I am playing around with function names, but it has a test suite that demonstrates usage.

These are some great quotes. Caveat: I am old. Programming methodology debates wear me out as quickly as language or commenting convention debates :)

I certainly remember many of the quotes when they first came about. I’m a product of the object-oriented wave. At the time I felt I had all the arguments as to why things were so much better with (or so much worse without) OO. In many cases, looking back I realized I was basically arguing for better tools and/or slightly better adherence to a few conventions.

My lessons from going through each “language” transition from debates over “assembler is the only way” through today’s DSLs and more:

1. OO wasn't good or bad intrinsically. The principles, however, can easily lead to more manageable or coherent code over time. By and large, inheritance, polymorphism, encapsulation, and abstraction all form the foundation of large-scale systems.

2. Languages can do a great deal of harm, not concepts. Far too often, engineers dive into a new language or paradigm and assume all the code needs to exhibit all the properties of the new religion. In the early days of C++ programming, the saying we had on our projects was "a framework is not a compiler test suite". I think in all languages, especially today, the risk is that you do more harm than good when you try to do everything in some fancy new way. Maybe there is a role of operator overloading or templates in C++ but I never really found it. But I am pretty certain nearly every framework employed these techniques. You can't blame the language, because every language has stuff you can abuse. You can blame the zealots or evangelists, who often cause the most challenges.

3. Language and paradigm innovation benefits rarely scale in very large systems, but small systems early on exhibit amazing benefits. Most engineers are seduced by new languages and paradigms (OO was just the one that came after structured programming and before functional and others). In the beginning, the new language or approach is amazing. Always amazing. Over time the real world shows up and every new engineer feels that the code is bloated and needs a rewrite when they join a project. Efficiency declines. The magic fades as reality dominates. With more than a few engineers the complexity of interconnection between parts of the code base trumps the simplicity and elegance within one part of the architecture. Expressing those in paradigm or language elegant ways approaches a very high degree of difficulty over time. At one extreme we see competitions of “hello world” or the most basic app all being amazingly simple. At the other extreme we see a constant breakdown in even the most basic methodological approaches. Even “Goto” was hard to do without and certainly in OO maintaining a pure inheritance model, public/private data, or more become as close to impossible.

4. In algorithmic complexity terms, a language or paradigm is at best a constant factor improvement over any other choice. The age-old rule of thumb is that programmer productivity is language independent. While I have no doubt that one could not spin up a new social network in assembler, one would be equally hard pressed to write a device driver or graphics runtime in Ruby or Python. Part of why methodologies gain attention is because the runtimes/libraries/frameworks that come with them do the things that need to be done the way that people want them to work today---that’s what gives the appearance of improved productivity. The right libraries in C can serve a great purpose. We see this when a language gets a new library that seems to bring renewed interest to it.

5. Tools are everything. What makes or breaks a paradigm/language are tools. You can take a simple language or paradigm and have great tools and become much more productive than a “better” paradigm with poor or ill-suited tools. One way that this surfaced over time was with tools that generated the right code—interface builders for example. Then using complex, archaic, or intensely manual approaches lacking formal foundations would become much easier. Plus the bonus of transparency of code generation really helped because other tools could easily integrate (having access to a whole tool system is also more productive than any one methodology+tool).

Ultimately, I think OO is perfectly good, and almost all modern systems make use of the 4 basic pillars of the paradigm. I don't think it ever became the answer to code reuse or code quality that proponents claimed. Ultimately the methodology is going to be trumped by the scale and age of the code and system. Any success means your ability to start over is reduced, and so the best bet is to focus on knowing what principles your project is being created and run with.

> The principles, however, can easily lead to more manageable or coherent code over time.

This. 1000 times over. This is why well factored code bases can often seem to have lots of redundant classes and interfaces to inexperienced developers. It's all about maintainability over time, building bulkheads around change.

> tools that generated the right code—interface builders for example

The irony is that these tools often fail when the benefit is considered over time. This comes from a lack of tools: their code works badly with SCM (no domain-specific diff tools). They throw away everything we've learnt about software engineering just to win the marketing demos.

How do redundant classes and interfaces improve maintainability over time?

Because they improve maintainability, they are not redundant.

I asked how redundant classes and interfaces improve maintainability over time.

I'll narrow down my question some more:

How does duplicated code improve maintainability over time?

The author stated that the classes and interfaces *seem* redundant, not that they actually are. Repeated code can improve maintainability because it may actually have different reasons to change, and only look similar on the surface. I've often seen this missed by inexperienced devs who get overzealous trying to DRY up everything.

> Maybe there is a role of operator overloading or templates in C++ but I never really found it.

There is. For templates I'd think the role is obvious, for operator overloading, at least operator-> is indispensable.

Many times though, the operator overloading is just a way to hide what's really going on, making it harder for the programmer to have a good grasp of either what the error conditions might be or why there's an expensive operation going on in a particular statement.

Operator overloading is just a function call. Everything you say about overloading an operator could be said about the function call that would replace the overloaded operator. I've never had trouble recognizing that a use of an operator was being done with a non-primitive type, which is the only way the confusion you describe could possibly happen, and the problem you describe has never happened to me while working on C++ code. I don't know why you think it is even plausible.

Operator overloading is a syntactic sugar trick that hides the real method call. Nothing more, nothing less.

It's useful in vector math, complex number math, matrix math.

Operator overloading is useful for a lot more than just math. Overloading * and -> is useful for pointer-like types. Overloading () is useful for function-like types. Overloading [] is useful for collections and overloading ++ and -- is useful for iterators.

The overloading of (), [], and the like is used but it's not necessary. They could have been other named overloaded functions.

A function call is always a function call; an operator is not. It makes for more opaque code.

Function calls aren't always function calls.

> Tools are everything. What makes or breaks a paradigm/language are tools

I completely agree. In my experience, and it's probably the same for many people, I spend a non-trivial amount of time not writing "primary" code. Instead I'm testing, debugging, dealing with build issues and dependencies, packaging, wrangling deployments etc...

> Maybe there is a role of operator overloading or templates in C++ but I never really found it.

Operator overloading like using + on strings or adding n-space vectors to other n-space vectors is at least convenient, isn't it?

But at the same time, it's something that could be achieved by Haskell-style typeclasses, e.g. a "concatenatable" typeclass, which then gives you more polymorphism for free than you originally anticipated!

Defining an instance of "concatenable" (eg, Monoid) for some type in Haskell is pretty much isomorphic to overloading some free function (or operator) on some type in C++. It's the same "amount of polymorphism" either way, namely ad-hoc polymorphism.

I'm in some programming class in France. The teacher argued that OOP and encapsulation make programs more secure. It really felt like a political statement.

The day I understood that class and struct are merely the same thing, I questioned the purpose of private: and public:.

To be honest, I think the sole purpose of those constructs is to hide proprietary code when you sell libraries and deliver a set of binaries and headers, and to force the compiler to forbid you from writing to or using protected members. It's really moot. It's just a coding practice, not some grand way to think about and construct applications.

OOP is nice if you want to create some non-standard simple type, like Vec3, or when you want to use an interface for something quite complex. Templates are a good extension of OOP, but they aren't used that much, even though the STL justifies their existence. Other than that, when you want to make an end-user application that is not a library, I don't think OOP is that useful. Too few programmers write libraries; most just write applications, and applications don't need OOP.

If you use OOP to pretend that your code is reusable, please, first check if what you're making really is worth reusing, chances are it's not.

TL;DR: OOP is good for designing bricks. Most coders are not brick designers, thus most programmers should not touch OOP that much.

Public and private have nothing at all to do with hiding proprietary code. Encapsulation is important regardless of whether you use OOP or not. Even if you program something alone that will never be derived/reused, you should strive to have clean boundaries between subsystems. You also want to make things read-only unless they should change (most things shouldn't).

If you make everything public and writable you have to spend a lot of thought on things that a compiler could help you with so you can focus on solving the problem. Much like using a type system.

If you're using public and private for security you've missed the point. The idea is to help other developers use your code correctly by only exposing the methods they should use, instead of everything.

> I'm in some programming class in france. The teacher argued that OOP and encapsulation makes programs more secure. It really felt like a political statement.

Really depends on what he meant by secure. I'm French too. Encapsulation isn't about shipping proprietary binaries. You don't need OOP for that; furthermore, almost anything can be decompiled, so your point is moot.

You'll understand when you work with a team; you'll understand the value of encapsulation, and of forcing every developer to work with well-defined interfaces that cannot be violated. Encapsulation makes communication easier, testing easier, it makes everything easier.

> Templates are a good extension of OOP, but it's not used that much, even while the STL justifies the existence of templates.

I would argue templates have little to do with OOP. They're a tool that allows genericity. You could have generics in a non-OO language.

> It's just a coding practice, it's not some grand way to think and construct applications.

No, it's a feature, not a practice. The alternative is ugly stuff like in JavaScript, where you put _ before prototype methods to express an intent. So much so that ES6 comes with symbols, because it shouldn't be just an intent. A developer should be able to express what he means, not just an intent (still possible in ES5, with privileged methods, but not compatible with prototypal inheritance).

OOP is a tool, and encapsulation is one of the most important things in OOP. Whether a language implements explicit encapsulation or not is of course an implementation detail.

The object-oriented paradigm started when procedural was the most popular way to code. There were problems with the procedural approach that made understanding and working in large applications very tough. Before OOP came around, coders tried encapsulating their code by separating it into separate files. This, however, solved only one problem, and that only partially.

Coders also tried revolving their procedures (casually called functions) around their objects: get_student_name(struct student), calculate_ranks(struct student_list), and so on.

It was necessary to have a paradigm that represented code very close to how we see this world. OOP just had to happen; it was not avoidable.

Now, the challenge lies in the fact that, as with anything powerful, one can go ahead and use it incorrectly. There lies the harm. But then there are better ways to learn the concept. I wrote a blog post about OOP recently. Read it here: http://wp.me/p5jxzK-1b

Go ahead and use it; OOP isn't going anywhere soon, and it ain't that harmful :)


I am using object-oriented programming (OOP) and like it! I'm using OOP via Microsoft's Visual Basic .NET, the .NET Framework, ASP.NET, ADO.NET, etc.

For inheritance, I've never tried to understand it or use it. Yes, the .NET Framework apparently uses inheritance, but as far as I can see the inheritance is mostly just imaginary, mostly just a means of documentation. E.g., I can have a variable X of type byte, integer, float, GUID, etc. and write X.ToString, that is, call (it really is essentially call-return semantics) the method ToString inherited from somewhere, some abstract class or some such, but I don't believe that there is actually any real inheriting going on. Maybe there is some overloading, but then the spelling 'ToString' is just a convenience for documentation and usage. Or, just as easily, in the case where the type of X was integer, I could write X.ToStringInt where the method ToStringInt was for converting integers to strings. Fine with me. I know; I know: if I can write X.ToString when the type of X is an integer, then where X is declared I can change its type to, say, GUID, keep the code X.ToString, and thus get code reuse, but I'd nearly never do that! Instead I'd want to check over my code to be sure the code still did what I wanted it to do.

Mostly I look at instances of classes much like the data structures in PL/I, e.g.,

        Declare
          n_points   Fixed Binary,
        1 A          Based,
          2 Coordinates( n_points ),
            3 X  Float,
            3 Y  Float,
          2 Lengths (n_points) Float;

So, in Visual Basic .NET one could write much the same.

OO has been tremendously successful; it has allowed us to build systems that we wouldn't have dreamed of before. The Web was invented on a NeXT in Objective-C.

Reuse in UI frameworks has been great, and I've had similar success with custom frameworks.

However, success invariably contains the seeds of failure, because success means that we are taken to the limits of applicability. To me, those limits were visible in the '90s when I wrote my master's thesis[1]: issues like the non-composability of frameworks, the runtime/compile-time and composition/inheritance dichotomies, architectural mismatch, etc.

Alas, nothing really happened, and IMHO, things actually got worse. At the time, I had two candidates for "the future", one being AOP and the other software architecture. AOP was a dud, but I am very hopeful about better linguistic support for software architecture, so much that I am creating a programming language that has software architecture as its organizing principle, deriving other paradigms such as OO from this base [2].

Having been exposed to FP early on, I have to admit I don't understand the current hype, because it seems to primarily address issues of "programming in the small" (tight coupling), although some of the lessons (communicate using simple data) are applicable elsewhere, and have been discovered elsewhere. I don't see a large-scale distributed system like the WWW built in the FP paradigm, but I'm happy to be corrected!

[1] http://www.metaobject.com/papers/Diplomarbeit.pdf

[2] http://objective.st/

AOP isn't a dud. Google Guice is AOP.


Taking the definitions from "Software Architecture: Perspectives on an Emerging Discipline" [1], you have the following:

- components

- connectors

- configurations (systems)

Guice (and other dependency injection frameworks) clearly address the third part: configurations. AOP is, at best, an implementation technique.

[1] http://www.amazon.com/Software-Architecture-Perspectives-Eme...

According to these guys, everything complex is harmful and everything simple is great. I want to start learning Assembly to finally understand the whys of programming on a hardware level. Is Assembly harmful? Any simpler and thus better alts? Or is Assembly a simpler version of some other ugly, threatening monster? Oh, btw, where's the best place to learn it?

Congratulations, you just discovered the great RISC vs CISC debate.

Separation of concerns, reuse, and modularity are the key principles here.

I'm a fan of hybrid OO/functional approaches, use classes but minimize side effects. Return new objects when relevant, etc.

It's a mistake to misjudge OO based on some over-bloated Java design or some under-designed implementation.

The phrase "everything in moderation" largely applies.

Granted, if you are doing things in C and passing a structure as a first argument, you are getting most of the way there.

I think inheritance is frequently over-applied; interfaces are a great idea (or even just duck typing), but not everything is an inheritance hierarchy.

Rather, encapsulation is the most powerful concept.

Much of what gives OO a good or bad name is in the hands of who is doing the architecture - abstracting things too early leads people to occasionally go to the extreme other side of the fence.

There's a balance to be had, and advantage to learning from multiple schools of thought.

I really liked this link from the server: http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfal... It is from 2009, so a bit dated, but it shows something that is prevalent today: objects obscure what they are doing, and they mix code and data, but data access is a lot slower than it used to be, so data optimization is more important than ever.

That has only gotten more true with DDR3 and DDR4 memory and giant caches. It is significant enough that it would behoove compiler optimizers to pause trying to do things with fewer instructions and start looking at better data organization.

I love the banana analogy :).

“The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.” – Joe Armstrong

OOP has the same relationship to programming that formalist literary critique does to writing: the willful denial of connection to reality. When you're solving a problem that's connected to the real world--perhaps you have a deadline, or a live human being will use your program at some point--it's often better to just go ahead and build something that works and can be easily explained to the person who will maintain it in the future.

As with formalism, you should know how OOP works and how seriously others take it, so you don't get into fights with your relatives at Thanksgiving.

And first of all, learn (in general, not you) that OOP doesn't mean the same thing in all programming languages, before complaining about OOP in general.

The thing that I learned from SICP isn't that functional is greater than OO, but that functional and OO are both design strategies that can assist you in designing a solution to a problem.

The problem with OO - and this applies to Smalltalk too - is that class hierarchies are a bad fit for a changing business environment, being hard to refactor when previous assumptions change: inevitably some new case will come along which requires changes to the structures that were originally defined, and OO is harder to deal with in that context.

That said, I do think OO is good for problem areas that actually have fairly static interfaces, like GUI libraries, for instance

Object-oriented programming is a product of personal computing systems like Smalltalk and Self.

This approach begins to suffer when application state is mixed between a remote server and a local client. Further, the physical metaphors of OOP suffer when removed from Smalltalk-like systems.

The rise of functional programming has as much to do with the rise of networked computing and shared data as with anything else.

Err... OOP isn't better/worse than any other approach, but there are some criticisms that I can't disagree with... still, I like having objects when the work requires some bookkeeping.

Programming is like kung-fu. You gotta pick and choose the best of each style, when they are relevant.

Who is this "cat -v" guy anyway? They have a lot of badly written bad opinions on a lot of stuff.

( http://harmful.cat-v.org/society/gay_marriage http://harmful.cat-v.org/political-correctness/girls-in-CS http://harmful.cat-v.org/economics/fair_trade )

"I treat my female coworkers with respect, I politely discuss technical stuff with them if they feel like. I do make sexist jokes if I was able to get to know them sufficiently before, like any healthy male."[0]

Wow. Any healthy male makes sexist jokes to women if he gets to know them? I guess I'm just not very healthy then.

"My wife’s male coworkers behave the same way and I have no problem with that."[0]

Easy to say you don't have a problem with something that doesn't affect you!

0: http://harmful.cat-v.org/political-correctness/girls-in-CS

I'm using object-oriented programming (OOP) and like it. As for the problems described in the OP, I'm not encountering them!

I want to write relatively simple code, thus, want to avoid tricky features of the software tools I use, and so far have been successful. But this desire means that I don't have deep experience with tricky aspects of OOP. So, YMMV.

So far mostly my code looks just like it would have before OOP; I've created only a few OOP classes; and all of those are simple. I write a lot of functions but not many classes.

Some things I do like:

(1) I can have an array of class instances and then sort the array with whatever class properties I want to use as the sorting keys.

(2) I can serialize an instance of a class to a byte array, send the byte array via TCP/IP, and deserialize the result.

Well, the OP notes some of the dangers of inheritance. I agree and saw the danger right away and, thus, in the code I write try not to use inheritance and so far have been fully successful.

But, I can think of situations where inheritance could be useful and keep the work better organized than not using inheritance.

But, I'd say: The main issue is just having humans understand the code, and for that, using inheritance or not, the main solution is just to document what is going on.

Or, if the way around using inheritance is just having multiple copies of the same source code, then just document this fact so that when you want to change one of the copies, you will likely change all the others, too. And, of course, you should generally know all the places in the code base where that code, or some modification of it, is being used. You can solve such an issue with just documentation.

Or: we shouldn't look to programming-language syntax and features to solve every problem of meaning in our software; meaning is better communicated with documentation.

Gee, so far this is a polymorphic post since I didn't say what OOP language I'm using! So, I'm writing in Microsoft's Visual Basic .NET with their .NET Framework, ASP.NET (for Web pages), and ADO.NET (for using SQL Server). That Microsoft software is awash in classes, and that architecture seems to be working well.

Yes, that Microsoft code is awash in inheritance, but mostly I just ignore that fact and regard it as more just documentation than actual software. I get by mostly ignoring inheritance because I don't use it directly in my code.

In my project, the hard technical work was the applied math; it turns out, given the math, the corresponding code is simple.

OOP is still good for gaming and simulations in general.

This old bag again.

I've always had a problem with object orientation and have never embraced it.

But none of these quotes (and most dev thinking) seems to share my reasoning.

My problem: almost every argument for or against object orientation is about us, the developers. I rarely hear any arguments that consider our users or customers. Oh sure, there's the usual (and lame) "It helps us serve them better."

I long ago lost track of all the lame bullshit (far too many to mention, but you know the culprits) that was supposed to revolutionize the way we build things without ever taking our users into consideration. Most of it was to make developers who couldn't build what was really needed appear as if they could. This has helped consulting firms and enterprise I.T. departments justify their rates and schedules, but has added little to the customers' benefit.

If the people who dream this shit up would stop focusing on what we need for 5 minutes and consider what they need, we'd all be way better off.

How has object orientation helped my customers? Frankly, I can't think of a thing. Add that quote to this list.

How else do you help end users through language design other than by making the language easier to build good products with? Nothing you said makes any sense.

Stating that object orientation doesn't help customers isn't the same as proving it, or even arguing it. How has imperative, or any other kind of programming, helped my customers?

> How has object orientation helped my customers? Frankly, I can't think of a thing. Add that quote to this list.

No. Why? Because it's bad. Not just as prose, but because you haven't even bothered to make a point.

> [...] Most of it was to make developers who couldn't build what was really needed appear as if they could. This has helped consulting firms and enterprise I.T. departments justify their rates and schedules, but has added little to the customers' benefit.

It sounds like you think OO: 1) has been overblown to justify high salaries

> OO is the “structured programming” snake oil of the 90'

and 2) has added needless complexity

> “The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.”

See? Your point was there, it was just actually said in a substantive, humorous way.

> “The problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.”

I'd like to see a real case where something like this happened in code. It never happens to me.

Developers are users. In fact, you should consider the developer your first user, since systems are still being written by humans. That said, you're right: object orientation does not serve the humans. Neither does the semicolon. Or the for loop. Or ...

I feel that OO imposes some semblance of structure using intuitive semantics most "average" developers can grok (not crazy stuff like Monads), thereby allowing Pointy-Haired Bosses to treat devs as commodities to be replaced at will.

Java is a language right at the sweet spot on that continuum. Just advanced enough to prevent most stupid bugs but also mediocre enough that most devs can grok it.

In the words of one of its creators :

"We managed to drag them half way to Lisp"

And how would you program an enterprise Java app without object orientation?

> (Not crazy stuff like Monads)

Read this and tell me that "crazy stuff like Monads" is an accurate statement:


Why does an enterprise app have to be implemented in Java (or any other language with similar semantics)? Just because you can doesn't necessarily mean that it's the best way.

Ideally, unless your customers are other devs, OO has nothing to do with your customers.

Good RESTful design and web URLs are centered around nouns and objects.
