Why OO Sucks by Joe Armstrong (2000) (otago.ac.nz)
656 points by andrewl 30 days ago | 380 comments



It really should be noted that years later Joe changed his mind about OO and came to the realization that perhaps Erlang is the only object-oriented language :) From a 2010 interview:

..."I wrote an article, a blog thing, years ago - Why object oriented programming is silly. I mainly wanted to provoke people with it. They had a quite interesting response to that and I managed to annoy a lot of people, which was part of the intention actually. I started wondering about what object oriented programming was and I thought Erlang wasn't object oriented, it was a functional programming language.

Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.

Alan Kay himself wrote this famous thing and said "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages". He wrote that and he said that the initial reaction to object oriented programming was to overemphasize the classes and methods and under emphasize the messages and if we talk much more about messages then it would be a lot nicer. The original Smalltalk was always talking about objects and you sent messages to them and they responded by sending messages back."

See https://www.infoq.com/interviews/johnson-armstrong-oop (2010) for the full answer (and more), it's worth a read.


That speaks to one of the things that bothers me about OOP's intellectual traditions: there are two different ideas of what "object" can mean, and most object-oriented languages and practices deeply conflate the two.

On the one hand, "object" can mean a unification of data structures with the procedures that act on them. In this view, the ideal is for everything to be an "object", and for all the procedures to actually be methods of some class. This is the place from which we get both the motivation for Java's ban on functions that don't belong to classes, and the criticism of Java as not being truly OO because not every type is an object. In this view, Erlang is not OO, since, at the root, functions are separate from datatypes.

On the other hand, "object" can describe a certain approach to modularity, where the modules are relatively isolated entities that are supposed to behave like black boxes that can only communicate by passing some sort of message back and forth. This ends up being the motivation for Java's practice of making all fields private, and only communicating with them through method calls. In this view, Erlang is extremely OO, for all the reasons described in parent.

I haven't done an exhaustive analysis or anything, but I'm beginning to suspect that most of the woes that critics commonly describe about OO come from the conflation of these two distinct ideas.


Don't forget inheritance. It's either orthogonal or essential to what it means to be 'object oriented', depending on who you are talking to.


I haven't, but, at least insofar as my thinking has developed (and insofar as Erlang supports it), the question of inheritance is more orthogonal than essential to the specific point I was trying to make. And failed to state clearly, so here it is: This essay is right, and Armstrong is also right when he said "Erlang might be the only object-oriented language". The tension there isn't, at the root, because Armstrong was confused about what OOP is really about; it's because OOP itself was (and is) confused about what OOP is really about.

That said, I would also argue that, like "object", "inheritance" is a word that can describe many distinct concepts, and here, too, OOP's intellectual traditions create a muddle by conflating them.


> dont forget inheritance

Inheritance is a limited convention to do mixins. Including it in the abstract idea of object oriented programming is harmful, other than in reference to the ugly history of "Classical OOP" or "Non-Kay OOP" as you like.


“I mainly wanted to provoke people...” I hate this. I see it way too often. It’s either a cop-out to avoid having to own up to your arguments, or it's just poisonous rhetoric in the first place that contributes to partisan opinions, especially when the speaker has an air of authority that causes people to accept what they say at face value. It is directly antithetical to critical thinking.


> It is directly antithetical to critical thinking.

I don't think it is.

Yes, it can get some people to just lash out in response.

But it also often forces people to think critically about how to convincingly justify their own standpoint to counter the provocation. This can be particularly useful when a viewpoint has "won" to the extent that people just blindly adopt it without understanding why.

It does have its problems in that it is hard to predict, and there's a risk that measured reactions get drowned out by shouting, so I'm not going to claim it's a great approach, but it has its moments.


True, I can see how in this case, at that time, it could be effective. But ironically, there seems to be a similar dogma surrounding FP these days - speaking even as a fan of the paradigm, with a perspective tempered by experience. I can’t help but think that polarized viewpoints like this contribute to replacing the subject of the idealization rather than the underlying problem of idealizing itself, if only indirectly due to the combination of the arguments themselves and the sense of authority behind them, rather than the merit of the arguments alone.


>This can be particularly useful when a viewpoint has "won" to the extent that people just blindly adopt it without understanding why.

Like the blind acceptance of the OOP religion (not the message-passing kind) since the '90s.


Isn't a method call a message, and the return value a message back? Or is it that "true OO" must be asynchronous?


> Isn't a method call a message, and the return value a message back?

It is!

In my view, the point that Alan Kay and Joe Armstrong are trying to make is that languages like C++/Java/C# etc have very limited message passing abilities.

Alan Kay uses the term "late binding". In Kay's opinion, "extreme late binding" is one of the most important aspects of his OOP [1], even more important than polymorphism. Extreme late binding basically means letting the object decide what it's gonna do with a message.

This is what languages like Objective-C and Ruby do: deciding what to do after a method is dispatched always happens at runtime. You can send a message that does not exist and have the class answer it (method_missing in Ruby); you can send a message to nil and it will simply return nil (Objective-C, IIRC); you can delegate everything but some messages to a third object; you can even send a message to a class running on another computer (CORBA, DCOM).
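A minimal Ruby sketch of that runtime decision-making (the class name and the "get_" convention here are invented purely for illustration):

```ruby
# An object that decides at runtime how to answer a message it has
# no method for: here it answers any message starting with "get_".
class LateBound
  def method_missing(name, *args)
    if name.to_s.start_with?("get_")
      "you asked for #{name.to_s.sub("get_", "")}"
    else
      super
    end
  end

  # Keep respond_to? consistent with the messages we actually answer.
  def respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("get_") || super
  end
end

obj = LateBound.new
puts obj.get_color  # no such method exists, yet the object answers
```

The binding happens per message, at the moment of the send; nothing about `get_color` existed before that.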

In C++, for example, the only kind of late binding that you have is abstract classes and vtables.

-

> Or is it that "true OO" must be asynchronous?

It doesn't have to be asynchronous, but in Alan Kay's world, the asynchronous part of messaging should be handled by that "dispatcher", rather than putting extra code in the sender or the receiver.

I don't remember Alan Kay elaborating on it, but he discusses this "interstitial" part of OOP systems a bit in [2]

-

[1] - https://en.wikipedia.org/wiki/Late_binding

[2] - http://wiki.c2.com/?AlanKayOnMessaging


C++'s vtable is also late binding, since you don't know which implementation you're calling until runtime. And there's no such thing as "extremely late binding".

> In C++, for example, the only kind of late binding that you have is abstract classes and vtables.

That's not true; you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allow for the "method_missing" protocol, and such features are very unpopular.

To be honest, I don't see much of a difference. I've worked with a lot of dynamic OOP languages, including Erlang-style actors, and I've never seen the enlightenment of dynamic OOP message passing.

And I actually like OOP, but I don't really see the point of all this hyperbole about Smalltalk.


> That's not true; you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allow for the "method_missing" protocol, and such features are very unpopular.

That is the difference. If every class in C++ had only one method, send_message, and each object were an independent thread, you would get how Erlang works. That is how you would do the actor model in C++.

Inheritance and polymorphism are emphasised in Java, C++, and C#, whereas functional programmers emphasise function objects / lambdas / the Command pattern, where you have just one method: calling the function. In fact, having just one method, you no longer need polymorphism / interfaces.
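A rough Ruby sketch of that single-entry-point style (the class and message names are made up for illustration): every interaction goes through one method, so there is no interface to declare and nothing to inherit.

```ruby
# An object with one public entry point: everything arrives as a message,
# and the object dispatches internally.
class Stack
  def initialize
    @items = []
  end

  def handle(message, *args)
    case message
    when :push then @items.push(args.first)
    when :pop  then @items.pop
    when :size then @items.size
    else raise ArgumentError, "unknown message: #{message}"
    end
  end
end

s = Stack.new
s.handle(:push, 1)
s.handle(:push, 2)
s.handle(:pop)   # => 2
s.handle(:size)  # => 1
```

Make each such object its own thread with a mailbox and you are most of the way to the actor model described above.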


What? This has nothing to do with functional programming.

FP needs polymorphism too and as a matter of fact FP tends to be even more static.

In FP we have type classes, which are built via OOP constructs in static OOP languages.

> In fact, having just one method, you no longer need polymorphism / interfaces.

That’s false.


It's not. You can use multiple dispatch.


But then why would you? Isn't ditching type safety a bad idea most of the time?


The only good thing about OO as an architecture is that nearly no education is required to introduce it to the most novice people in the field. It's basically the default thinking approach, rebranded. It comes with all the benefits of a mental model (quick orientation) and all the negatives of a mental model (badly adapted to machine execution; after a certain complexity level is reached, god-like actor objects, basically programmers in software disguise, start to appear).


Disagree. The original design patterns book was really about ways OOP should be used that don't fit people's everyday conception of objects. (Of course, that causes different problems for the novice keen to use the patterns, but that's another story.)


> C++'s vtable is also late binding, since you don't know which implementation you're calling until runtime. And there's no such thing as "extremely late binding".

C++'s vtables are determined at compile time. The specific implementation executed at a given moment may not be possible to deduce statically, but the set of possible methods is statically determined for every call site: It consists of the set of overridden implementations of the method with that name in the class hierarchy from the named type and downwards.

No such restriction exists in Ruby or Smalltalk or most other truly dynamic languages. E.g. for many Ruby ORMs, the methods that will exist on a given object representing a table will not be known until you have connected to the database and read the database schema from it, and at the same time I can construct the message I send to the object dynamically at runtime.

Furthermore the set of messages a given object will handle, or which code will handle it can change from one invocation to the next. E.g. memoization of computation in Ruby could look sort-of like this:

    class Memo
      def method_missing(op, *args)
        result = expensive_operation(op, *args) # stand-in for the real expensive work
        define_singleton_method(op) { result }
        result
      end
    end
After the first calculation of a given operation, instead of hitting method_missing, it just finds a newly created method returning the result.

"Extreme late binding" is used exactly because people think things like vtables represent late-binding, but the ability to dynamically construct and modify classes and methods at runtime represents substantially later binding.

E.g. there's no reason why all the code needs to be loaded before it is needed, with methods constructed at that time. And incidentally, this is not about vtables or no vtables - they are an implementation detail. Prof. Michael Franz's paper on Protocol Extension [1] provided a very simple mechanism for Oberon that translates nicely to vtables by dynamically augmenting them as code is loaded at runtime. For my (very much incomplete) Ruby compiler, I use almost the same approach to create vtables for Ruby classes that are dynamically updated by propagating the changes downwards until it reaches a point where the vtable slot is occupied by a different pointer than the one I'm replacing (indicating the original method has been overridden). Extending the vtables at runtime (as opposed to adding extra pointers) would add a bit of hassle, but is also not hard.

The point being that this is about language semantics, in terms of whether or not the language allows changing the binding at runtime, not about the specific method used to implement the method lookup semantics of each language - you can implement Ruby semantics with vtables, and C++ semantics with a dictionary lookup. That's not the part that makes the difference (well, it affects performance).

> That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allows for the "method_missing" protocol and such features are very unpopular.

If you're working in a language with static typing, you've already bought into a specific model; it's totally unsurprising that people who have rejected dynamic typing in their language choice will reject features of their statically typed language that do dynamic typing. I don't think that says anything particularly valuable about how useful it is - only that it is generally a poor fit for those types of languages.

[1] Protocol Extension: A Technique for Structuring Large Extensible Software Systems, ETH Technical Report (1994) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42....


I'm surprised the actor model hasn't been mentioned. Isn't this the modern name for what they're talking about?

Completely independent objects passing messages and entirely parallelizable.



My first exposure to the actor model was with Akka on Scala. After working with it for a little while, I thought "this is what OOP should be, perhaps I just hate broken implementations of OOP (i.e., Java, C++), rather than OOP itself." Heck, I like Ada95's implementation of OOP better than Java's.

I keep meaning to give Erlang a try, but just haven't had a reason yet. I do a lot of Clojure, these days :)


If you like Akka/Scala, definitely give Erlang a try.


I can highly recommend Elixir as a pleasant entry point. I've been looking into learning Erlang too though, much as the syntax is a bit daunting.


What does late binding buy you? That sounds like an argument for non-strictly typed languages. Isn't it the strict typing that prevents late binding? The compiler wants to know at compile time the types of all the messages and whether or not an object can handle each message, hence all messages must be typed and all objects must declare which messages they accept.


> What does late binding buy you?

Some things that come to mind:

- Abstract classes/methods, and interfaces. This is implemented using vtables in C++.

- Ability to send messages asynchronously, or to other computers, without exposing the details of such things. You just call a method in another class and let your dispatcher handle it. There was a whole industry built around this concept in the 90s: CORBA, DCOM, SOAP. And Erlang, of course, in a different way.

- Ability to change the class/object during runtime, like you can in Javascript and Lua by assigning `object.method = ...`. Javascript was inspired by Self (a dialect of Smalltalk), so there's that lineage. Other languages like Python and Ruby allow it too.

- Ability to use the message passing mechanism to capture messages and answer them. Similar to Ruby's "method_missing" and ES6 Proxies in Javascript. This is super useful for DSLs and a great abstraction to work with. Check this out: http://npmjs.com/package/domz

Remember that you can have some of those things without dynamic typing (Objective-C).
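Since much of the thread is about Ruby, here is a small sketch of the last two points in Ruby rather than Javascript (the class names and methods are invented for illustration):

```ruby
# Changing an object at runtime, Ruby's rough equivalent of `object.method = ...`:
class Dog; end

dog = Dog.new

# Add a method to this one object only, long after it was created.
dog.define_singleton_method(:speak) { "woof" }
dog.speak  # => "woof"

# Capturing unknown messages, method_missing-style, to build a tiny DSL:
class Xml
  def method_missing(tag, content = nil)
    "<#{tag}>#{content}</#{tag}>"
  end
end

Xml.new.title("hello")  # => "<title>hello</title>"
```

The `Xml` class never declares a `title` method; any message name becomes a tag, which is the same trick libraries like the domz package linked above play with ES6 Proxies.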


Objective-C is a compiled language.


Objective C is compiled, yes, but like Smalltalk OOP, the target object of the message is resolved and interpreted by that object at runtime.


Objective-C is much closer to true object orientation than C++, but IMO Apple neutered it by having the program crash if there was no message handler.


It crashes only if you let it.

a) The crash is from the default unhandled exception handler, which will send a signal to abort. So if you don't want to crash, you can either handle that particular exception or install a different unhandled exception handler.

b) An object gets sent the -forwardInvocation: message when objc_msgSend() encounters a message the object does not understand. The exception above gets raised by the default implementation of -forwardInvocation: in NSObject.

    o := NSObject new.
    o class
    -> NSObject
    n := NSInvocation invocationWithTarget:o andSelector: #class
    n resultOfInvoking class 
    -> NSObject
    o forwardInvocation:n 
    2019-04-22 07:49:12.339 stsh[5994:785157] exception sending message: -[NSObject class]: unrecognized selector sent to instance 0x7ff853d023c0 offset: {
(This shows that -forwardInvocation: in NSObject will raise that exception, even if the NSInvocation is for a message the object understands)

If you override -forwardInvocation:, you can handle the message yourself. In fact, that is the last-ditch effort by the runtime. You will first be given the chance to provide another object to send the message to ( - (id)forwardingTargetForSelector:(SEL)aSelector; ) or to resolve the message in some other way, for example by installing the method ( + (BOOL)resolveInstanceMethod:(SEL)sel; )[0].

Cocoa's undo system is implemented this way[1], as is Higher Order Messaging[2][3]

[0] https://developer.apple.com/documentation/objectivec/nsobjec...

[1] https://developer.apple.com/documentation/foundation/nsundom...

[2] https://en.wikipedia.org/wiki/Higher_order_message

[3] https://github.com/mpw/HOM/blob/master/HOM.m


Back when I wrote a lot of obj-c is when I really 'got' message passing vs. a function call. I miss obj-c, but everyone wants to move on to Swift.


Wasn't the design decision (and implementation) involved in place long before Apple had anything to do with it?


NextStep adopted it but did not invent it. Once Apple acquired NextStep and released OS X, they were the only major company supporting it and had de facto control over the language.

The complaint I have is with NSObject, which can be blamed on Next Step. Although, as another comment pointed out, I just didn't know about a workaround.


There were two different major mutually-incompatible “flavors” of Objective-C (my first book on Objective-C covered both, and my first Objective-C programming was done on a NeXTcube), one of which originated at NeXT. (NextStep was the OS that was NeXT's last major surviving product after they dropped hardware, not the company.)


Extreme late binding: for "The Pure Function Pipeline Data Flow", attach data or metadata to the data flow, then have the pipeline functions parse it at run time; this is simpler, more reliable, and clearer.
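If I understand the idea, a tiny Ruby sketch might look like this (the `:kind` tag and the handlers are my own invention, not part of the parent's proposal):

```ruby
# Plain data tagged with metadata flows through a pure function
# that decides at runtime, per item, how to handle it.
format = lambda do |item|
  case item[:kind]
  when :text   then item[:value].upcase
  when :number then item[:value].round(2)
  else item[:value]
  end
end

pipeline = [{ kind: :text, value: "hello" }, { kind: :number, value: 3.14159 }]
pipeline.map(&format)  # => ["HELLO", 3.14]
```

The dispatch decision lives in the data, not in a class hierarchy, which is one way to read "extreme late binding" for pipelines.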


C++, Java etc. all lack proper union types with appropriate pattern matching. So a lot of useful message passing patterns cannot be implemented without too much boilerplate.


I think that, in the spirit of OO, an object must have agency over how a message is interpreted in order for it to be considered a message. If the caller has already determined for the object that it is going to call a method, then the object has lost that agency. In a 'true OO' language an object may choose to invoke a method that corresponds to the details within the message, but that is not for the caller to decide.

Consider the following Ruby code:

    class MyClass
      def foo
        'bar'
      end
    end

    class MyOtherClass
      def method_missing(name, *args, &block)
        if name == :foo
          return 'bar'
        end
        super
      end
    end
To the outside observer, the two classes are effectively equivalent. Since, conceptually, a caller only sends a message `foo`, rather than calling a method named `foo`, the two classes are able to make choices about how to handle the message. In the first case that is as simple as invoking the method of the same name, but in the second case it decides to perform a comparison on the message instead. With reception of a message, it is free to make that choice. To the caller, it does not matter.

If the caller dug into the MyClass object, found the `foo` function pointer, and jumped into that function then it would sidestep the message passing step, which is exactly how some languages are implemented. In the spirit of OO, I am not sure we should consider such languages to be message passing, even though they do allow methods to be called.
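A rough Ruby analogy for the distinction (the class and method names are invented): with `send`, the method name is just data that the object resolves when the message arrives; capturing a Method object instead pins down the implementation up front, sidestepping any later interception.

```ruby
class Greeter
  def hello; "hi"; end
end

g = Greeter.new

# "Sending a message": the name is data, resolved by the object on arrival.
msg = ("hel" + "lo").to_sym
g.send(msg)          # => "hi"

# Direct dispatch: the caller grabs the implementation itself and invokes
# it, so method_missing-style interception no longer gets a say.
m = g.method(:hello)
m.call               # => "hi"
```

Both lines return the same thing here, but only the first leaves the object free to reinterpret the message.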


Is it unreasonable to think of the method as a semantic "port" to which messages (arguments) are passed?

And languages that allow programmers to bypass OO with jmp instructions seem multiparadigm rather than not-OO...


> semantic "port"

Not unreasonable at all! In fact, the term used in Objective-C and Ruby is “selector”. Beneath the synchronous veneer, anyway.


Vtables are an implementation detail. To see how you could compile Ruby with vtables, consider this:

    class A
      def foo; end
    end
    
    class B < A
      def foo; end
      def bar; end
    end
Now you make a vtable for class A that looks conceptually something like this:

    slot for foo = address_of(A#foo)
    slot for bar = method_missing_thunk(:bar)
And a vtable for class B that looks like this:

    slot for foo = address_of(B#foo)
    slot for bar = address_of(B#bar)
The point being that you can see every name used in a method call statically during parsing, and can add entries like `method_missing_thunk(:bar)` to the vtable that just push the corresponding symbol onto the stack and call a method_missing handler that tries to send method_missing to the object.

You still need to handle #send, but you can do that by keeping a mapping of symbols => vtable offset. Any symbol that is not found should trigger method_missing; that handles any dynamically constructed names, and also allows for dynamically constructed methods with names that have not been seen as normal method calls.

When I started experimenting with my Ruby compiler, I worried that this would waste too much space, since Ruby's class hierarchy is globally rooted, and so, without complicated extra analysis to chop it apart, every vtable ends up containing slots for every method name seen in the entire program. But in practice it seems like you need to get to systems with really huge numbers of classes before it becomes a real problem, as so many method names get reused. Even then you can just cap the number of names you put in the vtables, and fall back to the more expensive dispatch mechanism for methods you think will be called less frequently.

(redefining methods works by propagating the new pointer downwards until you find one that is overridden - you can tell it's overridden because it's different than the pointer at the site where you started propagating the redefined method downwards; so this trades off cost of method calls with potentially more expensive method re-definition)


What is the advantage of doing that instead of using an IObservable that can filter on the event name in C# or, even better in F#, having an exhaustive pattern match that automatically casts the argument to the expected type and notifies you at compile time if you forgot to handle some cases?


In Kay's OO, the only way to interact with an object was through message passing. It was important that the internal state of an object was kept private at all times.

Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world.

But we see getters/setters used constantly. People don't use OO in the way Kay intended. Yes, methods are the implementation of the whole "message passing" thing Kay was talking about, but we see them used in ways he did not intend.


In my experience, getter/setter abuse is always an attempt to use classes as structs/records.

I wonder whether, if we had different syntax for those cases, we'd have fewer of them.

But then, again, it's very convenient to be able to add a method to a class that was previously a dumb struct.


Maybe I am a complete philistine, but is that really a bad thing, or just something that goes against their purism? I get that there are some circumstances where setters would break assumptions, but classes are meant to be worked with, period.


Objects are meant to have a life cycle in which the state should only be changed by the object itself. Setters violate this idea by allowing the sender of the message direct control over the state of the object.

A simplistic example: account.deposit(100) may directly add 100 to the account's balance and a subsequent call to account.balance() may answer 100 more than when account.deposit(100) was called. But those details are up to that instance of the account not the sender of those messages. The sender should not be able to mutate account.balance directly, whether it be via direct access to the field or through the proxy of a setter.
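A Ruby rendering of that account example (my own sketch, with an invented validity check): the only way in is `deposit`, and the object guards its own state.

```ruby
# The account decides how a deposit affects its own state; callers
# never touch @balance directly, and there is no setter at all.
class Account
  def initialize
    @balance = 0
  end

  def deposit(amount)
    raise ArgumentError, "amount must be positive" unless amount > 0
    @balance += amount
  end

  def balance
    @balance
  end
end

account = Account.new
account.deposit(100)
account.balance  # => 100
```

Whether a deposit of 100 raises the balance by exactly 100 (or applies a fee, or queues the transaction) is the account's business, not the sender's.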


Well... a setter is the object changing its own state. That's why the setter has to be a member function.

I would say instead that an object shouldn't have setters or getters for any members unless really necessary. And by "necessary", I don't mean "it makes it easier to write code that treats the object as a struct". I mean "setting this field really is an action that this object has to expose to the external world in order to function properly". And not even "necessary" because I coded myself into a corner and that's the easiest way out I see. It needs to be necessary at the design level, not at the code level.


It depends; most of the time it's better to have separate functions that transform your data rather than having methods and state conflated together. But obviously it depends on the context.


Yeah, there are no hard and fast rules, but a lot of the time transformations can be in the object as well. If I need a function to transform Foo to Bar, I could just as easily send a toBar() message to an instance of Foo.


I think C# really got the best of both worlds with extension methods, where you can define functions that act on an object but are separated from the actual class definition. I still think that pure functions and especially higher-kinded types are probably better, although I have no direct experience with Haskell type classes, Scala implicits, and OCaml modules.


It's not exactly a bad thing, it's just that you're using a hammer (class) when what you actually need is a screwdriver (struct/record).

Abusing getters/setters is breaking encapsulation (I said abusing, light use is ok). If you're just going to expose all the innards of the class, why start with a Class?

The whole point of object orientation is to put data and behavior together. That's probably the only thing that both the C++/Java and the Smalltalk camps agree on.

Separating data and the behavior into two different classes breaks that. You're effectively making two classes, each with "half of a responsibility". I can argue that this breaks SRP and the Demeter principle in one go.

Another thing: abuse of getters/setters is often a symptom of procedural code disguised as OOP code. If you're not going to use what is probably the single biggest advantage of OOP, why use it at all?

-

Here's an answer that elaborates on this that I like:

https://softwareengineering.stackexchange.com/questions/2180...


> The whole point of object orientation is to put data and behavior together

May I politely disagree, based on my long-ago experience with Dylan, which has multimethods (<https://en.wikipedia.org/wiki/Multimethods>). This allowed the actions on the data (the methods) to be defined separately from the data. I strongly feel that it was OO done right, and it felt right. You can read about it at the wiki link, but it likely won't click until you play with it.

I'd like to give an example but it's too long ago and I don't have any to hand, sorry.


It’s a different semantic, in my opinion. Even in mutable objects, it's better to have setters that act only on the field they are supposed to mutate and do absolutely nothing else. If you need a notification, you can raise an event and the interested parties will react accordingly. By directly mutating an unrelated field or, even worse, calling an unrelated method that brings complete havoc to the current object state, in a setter, you are opening yourself up to an incredible amount of pain.


I disagree, slightly. A setter (or any method, for that matter) has to keep the object in a consistent state. If it can't set that one field without having to change others, then it has to change others.

Now, if you want to argue that an object probably shouldn't be written in the way that such things are necessary, you're probably right. And if you want to argue that it should "just set the one field in spirit" (that is, that it should do what it has to to set the field, but not do unrelated things), I would definitely agree with you. But it's not quite as simple as "only ever just set the one field".
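A small Ruby sketch of a setter that has to touch more than one field to stay consistent (the Circle class and its cached area are invented for illustration):

```ruby
# Setting the radius must also update the cached area,
# or the object starts lying about itself.
class Circle
  attr_reader :radius, :area

  def initialize(radius)
    self.radius = radius
  end

  def radius=(r)
    @radius = r
    @area = Math::PI * r * r  # dependent state updated in the same step
  end
end

c = Circle.new(2.0)
c.radius = 3.0
c.area  # => ~28.27
```

"Only ever set the one field" would leave `@area` stale here; keeping the invariant is exactly the extra work the setter exists to do.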


> Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world.

No, they don't, because “more or less” is not actually directly. Particularly, naive getters and setters can be (and often are) replaced with more complex behavior with no impact to consuming code because they are simply message handlers, and they abstract away the underlying state.


> No, they don't, because “more or less” is not actually directly.

I disagree.

Consider a `Counter` class, intended to be used for counting something. The class has one field: `Counter.count`, which is an integer.

A setter/getter for this field would be like `Counter.setCount(i: Int)` and `Counter.getCount() -> Int`. There is no effective difference between using these methods and having direct access to the internal state of the object.

A more "true OOP" solution would be to use methods with semantic meaning, for example: `Counter.increment()`, `Counter.decrement()`, and `Counter.getCount() -> Int`. (Yes, the getter is here because this is a simple example.) These kinds of methods are not directly exposing the internal state of the object to be freely manipulated by the outside world.

If your getter/setter does something other than just get/set, then it's not really a getter/setter anymore — it's a normal method that happens to manipulate the state, which is fine. But using getters/setters (in the naive, one-line sense) is commonplace with certain people, and I feel that their use undermines the principles Kay was getting at.
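The hypothetical Counter above, sketched in Ruby: the state can only be changed through messages with semantic meaning, never through a raw setter.

```ruby
# No setCount: callers can say what should happen,
# not what the internal state should become.
class Counter
  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end

  def decrement
    @count -= 1
  end

  def count
    @count
  end
end

c = Counter.new
c.increment
c.increment
c.decrement
c.count  # => 1
```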


I have seen side effects for completely unrelated fields in setters. Heck, I’ve even witnessed side effects in bloody getters. This is the reason why now I’m a huge fan of immutable objects. Actually nowadays I became a fan of functional languages with first class immutability support.


> but they undermine the design goal because they more or less directly expose internal state to the public world.

This has always been my problem with getters and setters. It's a way of either pretending you're not messing with the object's internal state, or putting band-aids on the fact that you are. For objects with dynamic state this is really bad. The result is racy or brittle code.


> Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world

If they do, that's your fault for letting them. I guess you mean when people chain stuff thus

company.programmers.WebDevs.employ('fred')

where .programmers and .WebDevs are exposed internals of the company and the programmers department respectively? (I've seen lots of this, and in much longer chains too. We all have.) In which case please see the Law of Demeter <https://en.wikipedia.org/wiki/Law_of_Demeter>, which says don't do this. The wiki article is good.
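A hypothetical sketch of the Demeter-friendly alternative (all names invented for illustration): instead of reaching through `.programmers.WebDevs`, the company exposes the operation itself and delegates internally.

```python
class Department:
    """Owns its own staff list; nobody else touches it."""
    def __init__(self):
        self._staff = []

    def employ(self, name):
        self._staff.append(name)

    def headcount(self):
        return len(self._staff)


class Company:
    def __init__(self):
        self._departments = {"web_devs": Department()}

    # The company exposes the operation, not the department object.
    def employ_web_dev(self, name):
        self._departments["web_devs"].employ(name)

    def web_dev_headcount(self):
        return self._departments["web_devs"].headcount()


acme = Company()
acme.employ_web_dev("fred")   # vs. company.programmers.WebDevs.employ('fred')
```

The caller now talks only to its direct collaborator, so the company is free to restructure its departments without breaking anyone.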

I doubt any language can prevent this kind of 'exposing guts' malpractice, it's down to the humans.

I remember reading that when Alan Kay saw the Linda model (<https://en.wikipedia.org/wiki/Linda_(coordination_language)>) he said it was closer to what he wanted Smalltalk to be.


> I doubt any language can prevent this kind of 'exposing guts' malpractice

Actually, true OOP languages do prevent this. Internal state is completely private and cannot be exposed externally. The only way to interact with an object's state is through its methods — which means the object itself is responsible for knowing how to manipulate its internal state.

Languages like Java are not "true" OOP in this sense, because they provide the programmer with mechanisms to allow external access to internal state.

Internal state should be kept internal. You shouldn't have a class `Foo` with a private internal `.bar` field and then provide public `Foo.getBar()` and `Foo.setBar()` methods, because you may as well just have made the `.bar` field public in that case.

Also, FWIW, I did not downvote you. I dunno why you were downvoted. Seems you had a legitimate point here, even if I disagree with it.


> Internal state should be kept internal.

I'm not sure that's a proven model. It's a proposed model, for sure. Since you can't protect memory from runtime access, you can't really protect state, so it's a matter of convention which Python cleverly baked in (_privatevar access).
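For what it's worth, a quick toy illustration of how little Python actually enforces here (not from any particular codebase): a single leading underscore is pure convention, and even double-underscore name mangling is trivially bypassed.

```python
class Account:
    def __init__(self):
        self._balance = 0      # convention: "internal, please don't touch"
        self.__secret = "pin"  # name-mangled to _Account__secret

a = Account()
a._balance = 100              # nothing stops runtime access
print(a._Account__secret)     # mangling is a speed bump, not a wall
```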


Ah sorry, I was speaking in the context of Kay's OOP! In retrospect my phrasing made it seem like I was stating an opinion as fact, but what I meant was just that Kay's OOP mandated that internal state could not be exposed and was very opinionated on the matter.


Why downvoted? I don't mind being wrong but would like to know where and why.


When I think of message passing, I think of message queues. There should be an arbiter, a medium of message passing so you can control how that message is passed and how it will arrive.

Java's and C++'s ways of message passing both stripped that medium down to a simple vtable used to look up what methods the object has. Erlang and Go have the right idea of passing messages through a medium that can serialize and multiprocess them. C# tries to do this with further abstractions like parallelized LINQ queries; C#, Python and Node.js use async/await to delegate messages to event queues; and Python can also send messages to multiple processes. All this shows us that message passing requires a medium that primitive method calls lack.
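A minimal sketch of the "medium" idea, assuming a toy actor built from a standard-library queue and a thread (not a real actor framework): the queue sits between sender and receiver, so delivery can be buffered and handed across threads, unlike a direct method call.

```python
import queue
import threading

mailbox = queue.Queue()           # the medium

def counter_actor():
    """A tiny 'actor': owns its state, reacts only to messages."""
    count = 0
    while True:
        msg, reply = mailbox.get()
        if msg == "increment":
            count += 1
        elif msg == "get":
            reply.put(count)
        elif msg == "stop":
            break

t = threading.Thread(target=counter_actor)
t.start()

mailbox.put(("increment", None))
mailbox.put(("increment", None))

reply = queue.Queue()
mailbox.put(("get", reply))
result = reply.get()
print(result)                     # 2
mailbox.put(("stop", None))
t.join()
```

Note that the sender never touches `count` directly; the only way to observe or change it is to put a message in the mailbox, which is the Erlang-style discipline this comment is describing.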


>both stripped that medium down to a simple vtable to look up what methods the object has.

If they used a vtable it'd just be slow. Not needing the trampoline, and the ability to inline harder, is what makes it fast. The usual cases are class hierarchy analysis, static calls (no more than a single implementer, proven by the compiler), guarded calls (check + inline; Java deoptimizes if need be), bi-morphic call-site inlining, inline caches, and if all that fails - the vtable thing.

Message passing in the classical way is just awfully slow as a bottom-of-the-stack building block. It doesn't map to the hardware. It does make sense for concurrency with bounded, lock-free queues (the actor model). But at some point, someone has to do the heavy lifting.


I suppose C++-style method calls are a limited form of OO - without asynchronicity, without running in independent threads when required, without shared-nothing state, and without the ability to upgrade or restart a failed component...


No, it does not have to be async. My impressions from using Squeak regarding this matter:

1. You can send any message to any object. In case the object does not have a suitable handler, you will get an exception: <object> does not understand <message>. The whole thing is very dynamic.

2. There is no `static` BS like in C# or Java. This is because each method has to be a method of an object. For each class there is a metaclass, which is an object too; see: https://en.m.wikipedia.org/wiki/Metaclass#/media/File%3ASmal...
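A loose analogy in Python for point 1, for readers who haven't used Squeak (this is an approximation of the idea, not real Smalltalk semantics): any "message" can be sent to any object, and unhandled ones fall through to a single hook, much like `doesNotUnderstand:`.

```python
class Proxy:
    def __getattr__(self, name):
        # Called only when normal lookup fails, i.e. for any
        # "message" this object has no handler for.
        def handler(*args):
            return f"{type(self).__name__} does not understand {name}"
        return handler

p = Proxy()
print(p.frobnicate(1, 2))   # Proxy does not understand frobnicate
```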


You can implement Smalltalk like patterns in C# via dynamic types and expression trees.


> "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about [function calls]."


> It really should be noted that years later Joe changed his mind about OO and came to the realization that perhaps Erlang is the only object-oriented language :)

But not in the way he's describing OO in his blog post. He's talking about a language with functions bound to objects and where objects have some internal state. The OO he's describing does not have isolation between objects because you can share aliases freely; references abound.


Nobody can agree on what OOP really is. I've been in and seen many long debates on the definition of OOP. It's kind of like a Rorschach test: people project their preferences and biases into the definition.

Until some central body is officially appointed definition duty, the definition debate will rage on.


Is this different from ANY other concept in technology? Personal Computing, Big Data, Cloud Computing, Deep Learning, Artificial Intelligence? We never have real definitions for any of these, and if you attempt to make one it will be obsolete before you finish your blog post.

The only real problem I see is that too many technologists insist that there is 'one definition to rule them all' and it's usually the one they most agree with. As long as we all understand that these terms are fluid and can explain the pros and cons of our particular version we will be fine.


The OO languages we are using should be called class-oriented instead of object-oriented.


Mutation-oriented, or maybe obfuscation-oriented


Simula, Smalltalk and CLOS have plenty of mutations.


If pretty much every single implementation of OO languages misunderstood Kay, that just means Kay either didn't explain himself well, or OO as he intended it is so easy to misunderstand that it's almost useless as a programming paradigm. At this point, it really doesn't matter anymore. OO is what OO languages like C++ and Java have made it. The original author in no way has a monopoly or even a privileged viewpoint in the matter.

And frankly, I agree with the original article. OO is very poor and leads to a lot of misunderstandings because it has a lot of problems in its core design. It "sucks." It never made much sense to me, and clearly it never made much sense even to the people designing languages such as C++ or Java, because it's taken decades to come up with somewhat useful self-imposed limitations and rules on how to use OO without ending up with an ugly mess.

It's completely unintuitive and out of the box misleads just about every beginner who tries to use it. A programming paradigm should make it obvious how it's supposed to be used, but OO does the opposite: it obfuscates how it should be used in favor of features like inheritance that lead users down a path of misery and pain due to complexity and dead ends that require rewriting code. In most cases, it's mostly a way to namespace code in an extremely complicated and unintuitive manner. And we haven't even touched the surface of its negative influences on data structures.


So, microservices are another attempt at emulating a good pattern with a huge pile of bad ones? :)


Both Alan Kay and Joe Armstrong struck me as having had the same attitude of trying to capitalize on the topic of object oriented programming, failing to recognize its importance, and then later trying to appropriate it by redefining it.

Not the best moment of these otherwise two bright minds.


Didn’t Alan Kay coin the term “object oriented”?


I don't believe he claims to, no.

He coined the term “object,” but what he meant by a computational object was different than what it came to mean: a data structure with associated operations upon it. Kay meant a parallel thread of execution which was generally sitting in a waiting state—one could make a very strong analogy between Smalltalk's vision of “objects” and what we call today “microservices,” albeit all living within the same programming language as an ecosystem rather than all being independent languages implementing some API.

But whether this is an “object-oriented” vision depends on whether you think that an object is intrinsically a data structure or an independent computer with its own memory speaking a common API. The most visible difference is that in the latter case one object mutating any other object's properties is evil—it is one computer secretly modifying another’s memory—whereas in the other case it is shrug-worthy. But arguably the bigger issue is philosophical.

That is hard to explain and so it might be best to have a specific example. So Smalltalk invents MVC and then you see endless reinventions that call themselves MVC in other languages. But most of these other adaptations of MVC have very object-oriented models: they describe some sort of data structure in some sort of data modeling language. But that is not the “object” understanding of a model in Smalltalk. When Smalltalk says “model” it means a computer which is maintaining two things: a current value of some data, and a list of subscribers to that value. Its API accepts requests to create/remove subscriptions, to modify the value, and to read the value. The modifications all send notifications to anyone who is subscribed to the value. There is not necessarily anything wrong with data-modeling the data, but it is not the central point of the model, which is the list of subscribers.
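Roughly, a subscriber-centric "model" in this sense might be sketched like this (a hypothetical Python rendering, not actual Smalltalk code):

```python
class Model:
    """A 'model' whose heart is the subscriber list, not the data shape."""
    def __init__(self, value=None):
        self._value = value
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def unsubscribe(self, callback):
        self._subscribers.remove(callback)

    def value(self):
        return self._value

    def set_value(self, new_value):
        self._value = new_value
        # Every modification notifies whoever is subscribed.
        for callback in self._subscribers:
            callback(new_value)

seen = []
m = Model(0)
m.subscribe(seen.append)
m.set_value(42)
print(seen)    # [42]
```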

A more extreme example: no OOP system that I know of would do something as barbarous as to implement a function which would do the following:

> Search through memory for EVERY reference to that object, and replace it with a reference to this object.

That just sounds like the worst idea ever in OOP-land; my understanding of objects is as data structures which are probably holding some sort of meaningful data; how dare you steal my data structure and replace it with another. But Smalltalk has this; it is called Object.become. If you are thinking of objects as these microservicey things then yeah, of course I want to find out how some microservice is misbehaving and then build a microservice that doesn't misbehave that way and then eventually swap my new microservice in for the running one. (That also hints at the necessary architecture to do this without literally scanning memory: like a DNS lookup giving you the actual address, every reference to an object must be a double-star pointer under the hood.) And as a direct consequence, when you are running Smalltalk you can modify almost every single bit of functionality in any of the standard libraries to be whatever you need it to be, live, while the program is running. Indeed the attitude in Smalltalk is that you will not write it in some text editor, but in the living program itself: the program you are designing is running as you are writing it and you use this ability to swap out components to massage it into the program that you need it to become.
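For illustration, the "double-star pointer" idea can be approximated with an explicit handle (a toy Python sketch; real `become` rewrites references at the VM level, which this does not attempt):

```python
class Handle:
    """Clients hold the handle, never the object itself, so one swap
    behind the handle rewires every reference at once."""
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Forward every message to whatever we currently point at.
        return getattr(self._target, name)

    def become(self, replacement):
        self._target = replacement


class OldService:
    def greet(self):
        return "old behavior"

class NewService:
    def greet(self):
        return "new behavior"

svc = Handle(OldService())
ref_elsewhere = svc               # "another reference in memory"
svc.become(NewService())          # swap while "running"
print(ref_elsewhere.greet())      # new behavior
```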


I didn't coin the term "object" -- and I shouldn't have used it in 1966 when I did coin the term "object-oriented programming" flippantly in response to the question "what are you working on?".

This is partly because the term at the time meant a patch of storage with multiple data fields -- like a punched card image in storage or a Sketchpad data-structure.

But my idea was about "things" that were like time-sharing processes, but for all entities. This was a simple idea that was catalyzed by seeing Sketchpad and Simula I in the same week in grad school.

The work we did at Parc after doing lots of software engineering to get everything to be "an object", was early, quite successful, and we called it "object-oriented programming".

I think this led to people in the 1980s wanting to be part of this in some way, and the term was applied in ways that weren't in my idea of "everything from software computers on a network intercommunicating by messages".

I don't think the term can be rescued at this point -- and I've quit using it to describe how we went about doing things.

It's worth trying to understand the difference between the idea, our pragmatic experiments on small machines at Xerox Parc, and what is called "OOP" today.

The simplest way to understand what we were driving at "way back then" was that we were trying to move from "programming" as it was thought of in the 60s -- where programs manipulated data structures -- to "growing systems" -- like Smalltalk and the Internet -- where the system would "stay alive" and help to move itself forward in time. (And so forth.)

The simplest way to think about this is that one way to characterize systems is that which is "made from intercommunicating dynamic modules". In order to make this work, one has to learn how to design and maintain systems ...


Oh wow.

I was really not expecting you to join this conversation and I am very thankful to have crossed paths with you, even so briefly. Sorry for getting you wrong about the “objects” vs. “OOP” thing.

I have thought you could maybe call it “node-oriented” or “thread-oriented” but after reading this comment I think “ecosystem-oriented” might be more faithful a term?


Alan suggested "server-oriented programming" in Quora:

https://www.quora.com/What-is-Alan-Kays-definition-of-Object...


I think the inspiration from Simula I is something a lot of folks either don't know about, or maybe they know about it but don't recognize its significance. Objects with encapsulated state that respond to well-defined messages are a useful level of abstraction for writing simulations of the sort Simula was built for. They're just not automatically a particularly wieldy abstraction for systems that aren't specifically about simulation. Some (most?) of that is about the skill of the programmer, imo, not some inherent flaw in the abstraction itself.

P.S.: Thank you for all your contributions to our profession, and for your measured response to these kinds of discussions.


It's out of the context of this thread, but we were quite sure that "simulation-style" systems design would be a much more powerful and comprehensive way to create most things on a computer, and most especially for personal computers.

At Parc, I think we were able to make our point. Around 2014 or so we brought back to life the NoteTaker Smalltalk from 1978, and I used it to make my visual material for a tribute to Ted Nelson. See what you think. https://www.youtube.com/watch?v=AnrlSqtpOkw&t=135s

This system --including everything -- "OS", SDK, Media, GUI, Tools, and the content -- is about 10,000 lines of Smalltalk-78 code sitting on top of about 6K bytes of machine code (the latter was emulated to get the whole system going).

I think what happened is that the early styles of programming, especially "data structures, procedures, imperative munging, etc." were clung to, in part because this was what was taught, and the more design-intensive but also more compact styles developed at Parc seemed very foreign. So when C++, Java, etc. came along the old styles were retained, and classes were relegated to creating abstract data types with getters and setters that could be munged from the outside.

Note that this is also "simulation style programming" but simulating data structures is a very weak approach to design for power and scaling.

I think the idea that all entities could be protected processes (and protected in both directions) that could be used as communicating modules for building systems got both missed and rejected.

Of course, much more can and should be done today more than 40 years after Parc. Massive scaling of every kind of resource requires even stronger systems designs, especially with regard to how resources can be found and offered.


Are you the Alan Kay? Is there any way we can verify this is you? The HN user account seems to have a very low "karma" rating, so one can't help but be a bit more suspicious.


I'm the "computing Alan Kay" from the ARPA/Parc research community (there's a clarinettist, a judge, a wrestler, etc.) I did create a new account for these replies (I used my old ARPA login name).


It's really cool that you weigh in on discussions on HN. Or I suppose it feels like that to me primarily because I grew up reading your quotes in info text boxes in programming texts. And it's cool to have that person responding to comments.


It’s a new account created yesterday. Alan Kay did an AMA here a while back+ with the username “alankay1” and occasionally posted elsewhere. That account’s last post was 7 months ago. Given that user “Alan-1”s style and content is similar, it seems likely that he created a new account after half a year away from HN.

If you want verification, maybe you can convince him to do another AMA =) I’m still thinking about his more cryptic answers from the last one, which is well worth a read. I think that was before Dynamicland existed, but I may be off.

+ https://news.ycombinator.com/item?id=11939851


That's what he claims but there's zero evidence besides his own word.


Well, how large is the pool of other possible candidates? Wouldn't someone from that time period (say the Simula folks, or another PARC employee) challenge that assertion? Why would he lie?


Ah, so you have evidence that it was somebody else?


I don't, but that's not how the burden of proof works.


Every source I've ever come across on this topic (and I work in PL research) points to Kay as the originator of the term "object-oriented" in relation to programming. No exceptions.

You are now making an affirmative assertion that Alan Kay did not coin the term. The burden of proof is on you, not him.


Link these sources, then! Even someone who recently interviewed him and researched the subject for months confessed he could never corroborate that claim.

You make the claim he coined the term, the burden of proof is on you.

Until you do, it's perfectly reasonable and intellectually honest to reject that claim.


Sure, it's impossible to corroborate at this point because there's no direct evidence of it. It's not like he wrote it in a mailing list that we still have access to. It was (according to what I've read about it) a verbal statement made in response to a question asked of him by someone else. I don't know who the other person is, though perhaps that would be a place to look.

References I've seen have, of course, essentially all pointed back to Kay's claims. I imagine this is insufficient in your eyes, so I won't bother finding them for you.

Arguing "it's reasonable and intellectually honest to reject [the claim that Kay coined the term]" is silly. It's not reasonable, because there's no real reason to suspect the claim to be false in the first place. For 50+ years it has been accepted knowledge that Kay coined the term. Nobody — including people with direct experience on the same teams or with otherwise opposing claims — has stepped forward to dispute this fact in all that time. This would be just like saying "Well I don't think da Vinci really made the Mona Lisa. I mean, all we have is his word for it. Sure, the painting didn't exist before him, and its existence appears to have started with him, and people at the time attribute its existence to him, but for all we know maybe somebody else did it and gave it to him to use as his own!" Sure, it's possible... but it's a silly claim to make (and hence not reasonable).

Your position is not "intellectually honest" because it sincerely looks like you're just trying to be antagonistic. What's the point in arguing that Kay didn't coin the term? Do you have some unsung hero in mind you'd like to promote as the coiner? Or do you just like arguing against commonly-held beliefs for the sake of it? I don't see what you're trying to accomplish.

Two more thoughts:

1. The only way to prove Kay didn't originally coin the term would be to find hard evidence of it used in a similar fashion (i.e., with regard to programming) from prior to 1966 (the time Kay claims he invented the term).

2. If you had such evidence, you would need to prove that Kay had seen it prior to his alleged coinage. In the absence of such proof, the existence of the term prior to Kay's use would be irrelevant. Why? Because the community as a whole has gone off of Kay's claim for the whole time. If somebody else conceived of "object-oriented programming", we didn't get it from them — we got it from Kay.


Alan Kay responded above so...


Link at [0].

I'm a little skeptical. That user certainly writes in a similar style to how I've seen Alan Kay write online, but I wouldn't be opposed to seeing some more proof. A one-day-old HN account claiming to belong to one of the most important people in CS from the past 50 years seems a little suspicious haha.

[0] https://news.ycombinator.com/item?id=19717640


An interesting and unfortunately true commentary on the lack of civilized behavior using technology that actually required a fair amount of effort -- and civilized behavior -- to invent in the first place.


Yeah, it's definitely disappointing that we have to worry about things like that, but that's the nature of the beast I guess. I hope you don't take any offense at my skepticism! For what it's worth, I'm happy assuming you're the real deal because being a cynic all the time is no fun and I have no specific reason to believe otherwise at the moment; I just also wouldn't be surprised to discover it's fake haha.

Also, I walk by your face a few times a week whenever I head into my lab. MEB has redecorated a few times over the years, but they always have a section of pictures of notable alumni and (of course) you're up there. Thanks for giving us a good name in the field and for all you've done!


Merrill Engineering Building! I'm glad it is still around. Those long hallways were used as a "display" to unroll the many pages of Simula machine code listings down one corridor so that three grad students -- including me -- could crawl over it and coordinate to try to understand just what Simula might actually be (the documentation in Norwegian that had been transliterated into English was not understandable).


I'd be really interested to hear what you think they missed, because I find your claim to be surprising and a bit preposterous.


Armstrong wrote the very famous "Why OO sucks" and then a decade or two later, changed his mind when he saw how successful OO was, and then tried to retrofit Erlang into an OO language. Not by changing Erlang, but by twisting the definition of OOP so that Erlang would fit it.


That isn't what happened at all (see the rebuttal by revvx). Joe was a great guy and also a great systems thinker. And he was the last person to worry about "bandwagons" (quite the opposite!)


I don't think that's what happened.

Joe Armstrong was criticizing C++-style OOP when he wrote his critique.

After he learned more about Alan Kay's view on OOP, he decided that Erlang is closer to Alan Kay's OOP and he approves that specific flavor of OOP.

He didn't change his stance based on popularity. He changed his stance because in the 80s/90s the term "OOP" was synonymous with C++-style OOP, but that changed in the 2000s thanks to 1) C++-style OOP criticism becoming commonplace in our industry (thanks to people like Joe Armstrong) and 2) an increase in the popularity of languages like Ruby and Objective-C (which are closer to Smalltalk) and even much-maligned concepts such as DCOM, SOA and CORBA.


He doesn't even mention C++ in his essay [1], but regardless, the C++ OOP is pretty much the mainstream OOP, which we still use today in Java, Kotlin, C#, etc...

And... no, the change in mindset about OOP never happened. Kay's and Armstrong's view of OOP never caught on. Today, OOP is still not seen as message passing; it's mostly seen as polymorphism, parametric typing, classes/traits/interfaces, and encapsulation. The complete opposite of what Erlang is.

[1] http://harmful.cat-v.org/software/OO_programming/why_oo_suck...


I'm the one mentioning C++. To anyone familiar with both styles, Joe Armstrong is clearly not talking about Smalltalk-style OOP in his essay, he's talking about C++/Java/etc style. And later on he only praised Smalltalk-style OOP.

And sorry, by a "change in mindset in our industry regarding OOP" I mean that it became commonplace to criticize C++-style OOP. Not that everyone stopped programming in that style. Maybe there's a better way to phrase it?


"seen as" is the key here. "The masses" ultimately usually get to define terms, for good or bad. The gestalt or "feel" of what OOP "is" is often shaped by common languages and their common usage, again for good or bad.

It may be better to define specific flavors or aspects of OOP or OOP-ish things and discuss them in isolation with specific scenarios. That way the messy issue of canonical definition(s) doesn't come into play as often.

It would then be more of "hey, here's a cool feature of Language X or System X! Look what it can do...". Whether it's canonical or not is then moot.


Again, please do your homework.


This is way off. Please try to do more homework.


Well, I disagree with 99% of this... I'm a guy who started with C, moved to functional programming, added C++, and now does all 3.

> Objection 1. Data structure and functions should not be bound together

Well, in my experience, in almost every code-base (either from functional or imperative programming), we end up with modules, which are a set of functions taking the same type as a parameter. This is very close to binding the functions and the types...

> Objection 2. Everything has to be an object.

I don't get the example. The only thing this shows is the benefit of having a range type built into the language. Then it's just type aliases.

"There are no associated methods.", yes, but you will need functions to manipulate those types (if only to translate one type into another); in the end, it's going to be a module, which is almost an object.

> Objection 3. In an OOPL data type definitions are spread out all over the place.

That's true. It also makes thinking about the data layout complex. That's why other paradigms have been developed (DOP) on top of OOP. Then again, you could also argue that having those defined together makes dependency management easier.

> Objection 4. Objects have private state.

False. Objects can have private state. This is a problem with mutability, not object-oriented programming. You can have immutable OOP.
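For example, a minimal sketch of immutable OOP in Python, where "setters" return new objects instead of mutating state (illustrative only):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

    def moved(self, dx, dy):
        # No mutation: produce a new value, functional-update style.
        # Assigning to p.x on a frozen dataclass raises FrozenInstanceError.
        return replace(self, x=self.x + dx, y=self.y + dy)

p = Point(1, 2)
q = p.moved(3, 0)
print(p, q)    # Point(x=1, y=2) Point(x=4, y=2)
```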

> Why was OO popular?

>> Reason 1. It was thought to be easy to learn.

The past 20 years have shown how easy it is. In fact, I actually think it's too easy: people rely too much on abstraction without even trying to understand what's going on. In my opinion, it promotes a lazy mindset (this is my biggest criticism of OOP).

>> Reason 2. It was thought to make code reuse easier.

I would like evidence that it's not.

>> Reason 3. It was hyped.

True, but that does not make it bad. People have tried to hype every technology... Some stayed, some went away.

>> Reason 4. It created a new software industry.

How has OOP created a software industry that would not have existed if functional programming had "won the fight"?


Upvoted because it's well-articulated, even though I disagree.

> Well, in my experience, in almost every code-base (either from functional or imperative programming), we end up with modules, which are a set of functions taking the same type as a parameter. This is very close to binding the functions and the types...

There is a key distinction: If I have two subsystems that use the same data in different ways, I can keep those concerns separate by putting the functions for each concern into a different module. Binding all the functions to the type mixes the concerns together and creates objects with way too much surface area.

Also, most OO langs make a big ceremony out of each new type: create the class file, create the test file, blah blah blah. I want types to be cheap so I can make them easily and capture more meaning with less work.


> Upvoted because it's well-articulated, even though I disagree.

Appreciate it :)

> There is a key distinction: If I have two subsystems that use the same data in different ways, I can keep those concerns separate by putting the functions for each concern into a different module. Binding all the functions to the type mixes the concerns together and creates objects with way too much surface area.

This is where composition helps. Historically, OOP programmers have indeed not been the best at using composition, but looking at more recent projects, this has gotten a lot better.

> Also, most OO langs make a big ceremony out of each new type: create the class file, create the test file, blah blah blah. I want types to be cheap so I can make them easily and capture more meaning with less work.

Totally agree with that, the ability to define a type in one line and have it reflected through the entire code base through type inference is the one thing that I miss the most in C/C++.


> This is where composition helps.

It does, though in my experience it leads you down a path that ends in some pretty strange names, as you nominalise more and more nebulous concepts, trying to verb in the kingdom of nouns.


Is that any different from foldl, foldr, reduce, map? If you have a generic data type you want your operators to be generic, regardless of whether they exist as methods or as separate functions. The only difference is that the object is free to not leak internal implementation details.


> > Also, most OO langs make a big ceremony out of each new type: create the class file, create the test file, blah blah blah. I want types to be cheap so I can make them easily and capture more meaning with less work.

> Totally agree with that, the ability to define a type in one line and have it reflected through the entire code base through type inference is the one thing that I miss the most in C/C++.

FWIW, I think that this is what distinguishes object-oriented programming as a language paradigm from object-oriented programming as a design paradigm: If you're going to say that all data types should have the operations you can perform on them bound up together into a single class (or class cluster), then that would imply that small, cheap data storage types are expected to be few in number.

If, OTOH, it's more about modularity, and you're not so concerned about how things happen on the sub-module level, then that gives more ideological space for code that's, for example, functional in the small scale and object-oriented in the large scale, like Erlang. Or procedural in the small scale and object-oriented in the large scale, like some C++ code.


I pretty much agree with your statements, but I'd like to take a stab at:

> > Reason 2. It was thought to make code reuse easier.

> I would like evidence that it's not.

Mainstream OOP approaches achieve better cohesion by coupling data structures to functions. In the worst case you end up with essentially modules that contain "global variables" local to that module. In other words, the only reason to have your instance variables is to remove the need to pass those variables to the functions as parameters.

This hurts the ability to write generic code. In fact you see this problem all the time in OO code. You have a base class and a bunch of basically unrelated child classes. It's not so much that the child ISA base, it's more that the child ACTS_AS_A base. But then, you run into all sorts of problems because one child (because it is using very different data structures) requires specialised code.

There are ways of getting around this, but often those ways end up encouraging you to implement an alphabet soup of design patterns that interact with each other -- causing more coupling rather than less. All for the want of a generic function.

IMHO OO is actually a poor vehicle for achieving code reuse. In fact, aiming towards this goal is usually one of the root causes I find in really poor OO designs. What OO is really good at is separating concerns and building highly cohesive code. This sometimes comes at the cost of increased coupling which inherently reduces reusability. I don't actually think that's a bad thing when used appropriately, but the old school "OO creates reusable code" is just a bad idea IMHO. It's the kind of thing that several of us threw out the window in the 90's along with large inheritance hierarchies -- nice idea, but didn't work out in practice.


> Objects can have a private state. This is a problem with mutability, not object-oriented programming. You can have non-mutable OOP.

Wouldn't this violate the "encapsulation" pillar of OOP? As far as I know, it's always taught with encapsulation, inheritance, polymorphism being its three pillars.

> How has OOP created a software industry that would not have existed if functional programing had "won the fight"?

I'm not sure functional programming has lost yet. I haven't worked with it personally, and so can't speak to its merits or demerits, but have heard a lot of buzz around it recently. As you said, people tend to hype everything; some stay and some go. It might be the next big thing in programming, or it might be hipster tech. Or, like most things, it might have some good applications, but not be applicable to everything. That's basically my argument for OOP.


> Wouldn't this violate the "encapsulation" pillar of OOP? As far as I know, it's always taught with encapsulation, inheritance, polymorphism being its three pillars.

Encapsulation is "if you have a state, you should encapsulate it". It does not ask you to have a state (even less a mutable one). I quite often use object to represent a logical piece of code, without any attributes.

> I'm not sure functional programming has lost yet. I haven't worked with it personally, and so can't speak to its merits or demerits, but have heard a lot of buzz around it recently.

As much as I really enjoy FP, I don't think it has more than 1% of the market share of software engineering. And I've been hearing the "heard a lot of buzz around it recently" for more than 10 years.


The question isn't whether OOP or FP will win, but what mix of both is best. A lot of old OOP languages have added features that move them more towards FP. Off the top of my head, C# got e.g. extension methods, lambdas, and many ways to be less mutable or pass multiple values around. The bit of programming history I was allowed to experience definitely became more functional.


This is exactly it. The question is not "which will take over the world". Both can contribute useful features in various situations, and this is why I like languages that let me use what is best in each situation.


> Objects can have a private state. This is a problem with mutability, not object-oriented programming.

That seems to be the crux of it - I remember reading a post by Paul Graham where he said something similar about object oriented programming. His core thesis seemed to be that functional programming does a better job of organizing things than object-oriented programming does, and once you have the core of functional programming in place (closures, first-class functions, whatever the hell a "monad" is), you don't need object orientation any more. I've never gotten deep enough into pure functional programming to really see things this way, but I've gotten deep enough to at least understand why these pure FP guys might think that.


>Well, in my experience, in almost every code-base (either from functional or imperative programming), we end up with modules, which are sets of functions taking the same type as a parameter. This is very close to binding the functions and the types...

And the world is not so neatly divided between things that just are (data structures) and things that do things (functions). Take a date, for example. The fact that it is a Wednesday is a "just is" sort of thing but is typically implemented as a function.
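A concrete illustration with java.time (just a sketch; the class name is made up): the date itself is stored data, while "is a Wednesday" is derived by a function over that data.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class WeekdayDemo {
    public static void main(String[] args) {
        // the "just is" part: a point on the time axis
        LocalDate d = LocalDate.of(2019, 3, 6);

        // "it is a Wednesday" is computed, not stored
        System.out.println(d.getDayOfWeek()); // WEDNESDAY

        // the same point in another coordinate system: days since Jan 1 1970
        System.out.println(d.toEpochDay()); // 17961
    }
}
```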


I'm not sure I'd agree that it being a Wednesday is a "just is" sort of thing. The point in time is a data point (on the time axis, if you will). A function then needs to place it in a calendar.

FWIW I'm struggling to come up with a good example of where the line between data and functions is clearcut, except perhaps when the data describes a function: an SQL string, some code that'll get eval'ed, etc.


>The point in time is a data point (on the time axis, if you will)

You need some way to place it on that axis, though. Commonly we use Day, Month, and Year to do so. But we could also define a date as the seventh Wednesday in 2019. Or as an integer relative to Jan 1 1970.


Everything sucks, they just all suck differently.

I still think OO provides a pretty easy mental framework for programming. You can get good results. Bit of discipline without going crazy and it works really effectively. Despite its shortcomings.


I don't think what I'm about to say is necessarily inherently true, but it reflects how things seem to work in practice:

It seems to me that part of the problem is that OO doesn't force you to have discipline, and without constant vigilance (which product owners are never willing to schedule for) the system inevitably gets out of control over time.

On the other hand, it seems to me that the core principles of functional programming (immutability, functional purity, construction via composition, and declarative programming) serve as a check that prevents things from getting out of control.

That being said, I think it's worth considering that all of the "core principles" of FP that I mentioned could be incorporated into OO. It also seems like FP can be more prone to out of control syntax (e.g. unwise use of point free style).


Forced discipline is a double-edged sword. It can help you keep things clean and understandable, but it can also limit things. For example: I am pretty good with React and Redux, but I was more productive with jQuery. jQuery doesn't have the forced discipline of React+Redux, but it gets the job done. At the same time, I've seen larger amounts of crap jQuery code than React+Redux code.

The real issue: discipline has to be learned, often by experience; it isn't forced. If you force it, then no one knows why things are the way they are.


> If you force it, then no one knows why things are the way they are.

It's more accurate to say "if you don't teach why, then no one knows why". Forcing or not has nothing to do with it.

I would argue "forcing" is strictly better, since learning discipline requires experience in doing things in every other wrong way. While that's great for learning on your own, I wouldn't want developers to "learn discipline" like this in production.


I wish more languages would let you do stuff like mark reference parameters to methods as unable to be changed or reassigned within the method, get a readonly reference to a list without having to make a copy, that sort of thing. It doesn't have to be forced, just give me the option so I can get a guarantee on something if I want to.


If you mark all fields and variables in Java as final, you get pretty much this experience?

If I could go back in time and unilaterally make one change to Java, it would be to make `final` default. But if you just get in the habit of using it (the IDE helps), non-final variables look broken. And once objects have all-final fields, immutability just starts spreading upward in your code.
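For instance (a minimal sketch; `Point` and `translate` are made-up names), once all fields are final, "mutation" has to be expressed as constructing a new value, which is exactly how immutability starts spreading upward:

```java
// An all-final class: once constructed, an instance can never change.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int x() { return x; }
    public int y() { return y; }

    // "mutation" returns a fresh object instead of modifying this one
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```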


I’m a bit surprised they haven’t copied Scala’s case class, where you get immutable fields by default and helpers to copy with updated fields, along with a sane hash and equality implementation. Making immutability easier than the alternative makes a huge difference in practice.


Unfortunately, unlike C++ etc., constness doesn't propagate into objects marked final. So

    final List<String> x = new ArrayList<>();
stops x being reassigned but does nothing to stop the downstream code from mutating the contents of the list.


final List<?> immutable = Collections.unmodifiableList(x)... and pass it around.

Never expose your collections directly (unless you are willing to go copy on write and immutable objects inside the collections, the latter is very welcome, though)
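As a sketch of both points (note the JDK method is Collections.unmodifiableList; there is no Collections.immutableList): a final reference still lets callers mutate the list, while the unmodifiable view rejects writes, though it remains a live window onto the backing list:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FinalVsUnmodifiable {
    public static void main(String[] args) {
        final List<String> x = new ArrayList<>();
        x.add("a"); // fine: final only pins the reference, not the contents

        List<String> view = Collections.unmodifiableList(x);
        try {
            view.add("b"); // rejected at runtime
        } catch (UnsupportedOperationException e) {
            System.out.println("view is read-only");
        }

        // the view is a live window onto x, so writes through x still show up
        x.add("c");
        System.out.println(view.size()); // 2
    }
}
```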


Also, if you want to go 'whole hog' you can use any of the functional/immutable collection libraries like vavr, functionaljava, or jimmutable.


Rust


Like the 'const' parameter in Free Pascal/Delphi: http://wiki.freepascal.org/Const#Const_Parameter


Rust is immutable by default, optionally mut, and you lend different kinds of references: & (immutable, can have many) and &mut (mutable reference, must be unique).


Just my personal anecdote. The only functional programming languages I have extensive experience with are C and JS. I have NEVER seen a sensibly organized or maintained medium to large sized C or JS application. Every time it's been total chaos. In Java projects, there's a 50/50 shot of it being moderately sensible. I'm confident other people have completely different experience. Based on your post it sounds like you have had a different experience, and I have no trouble believing that. Anyways, just my $0.02.


In this context, C and JavaScript would not be considered functional. They have functions, but that's not what most people mean by "functional". While it's possible to restrict yourself to a functional subset in both of them, they would typically fall into the "imperative" category.

Imperative programming languages (like C and JavaScript) don't generally impose any discipline on the users.


> Imperative programming languages (like C and JavaScript) don't generally impose any discipline on the users.

I think this view disrespects history a little. C was one of the earliest languages to be conceived upon a foundation of structured programming principles, i.e. block structure, sequence/selection/repetition, subroutining. (Okay, the language still has goto; hopefully we can agree not to make a big deal out of that.) The kind of discipline proposed by structured programming was far from universally well-received at the time, and I think it's fair to say that it led to huge improvements in the quality of codebases everywhere, and is one of the great successes of programming language thinking of the '70s.

C is also statically typed. It's obviously easy to blow all sorts of holes in C's type system, but if you'd go so far as to say that C's types impose no discipline at all, I'd ask you to try teaching C to a room full of compsci students who have been raised on something like Python.


> I think this view disrespects history a little.

I love C, but I'm hard pressed to think of any language other than assembly which is more willing to get out of your way if you so much as nudge in an undisciplined direction. C certainly does not impose much discipline on programmers, but it allows them to bring some if they walk the line.

> Okay the language still has goto, hopefully we can agree to not make a big deal out of that.

> It's obviously easy to blow all sorts of holes in C's type system

Yes, I'm willing to politely ignore evidence which refutes your point. I mean you were kind enough to make my case for me. :-)

> I'd ask you to try teaching C to a room full of compsci students who have been raised on something like Python.

Dynamic vs static typing seems orthogonal to what we're talking about here, but maybe I'd have to think about that more. Python just waits to catch your type errors until runtime. Comparing that to JavaScript in a browser which silently ignores your errors (or happily performs crazy type conversions), Python seems disciplined in comparison.


Kind of true, however C's type system is very weak versus what 60's Algol and PL/I variants were capable of.


I think this might be a little unfair to JavaScript. No doubt it's a multi-paradigm language, but there's a huuuge FP community in JS, and loads of code bases (esp. in React) are written in a very functional style. Definitely not comparable to passing function pointers around in C.


Almost everyone I've come across who writes JS in a "functional style" has done it because they've had prior experience using a FP language and are applying those lessons to JS. I believe the OP is talking about those FP languages, such as Haskell and Clojure, which offer very different programming experiences.

While JS supports it in many ways, it's still not a style inherent in a multi-paradigm language like Javascript nor is it (really) the primary style in popular frameworks - despite the fact inspiration from FP languages/libraries has been increasingly common in popular frontend frameworks.

Additionally, even if you go full-bore FP on JS, it's still not the same. Almost no one goes full-bore FP in JS because it really doesn't make sense to nor is it an easy thing to do.

(And I say all of this as a frequent user of https://ramdajs.com)


You can get maybe 80% of the way there, but non-FP dependencies can still hurt you. That's also a problem with Clojure dependencies on Java libraries, but less so because at least the Clojure ecosystem mostly buys into the FP paradigm.


Definitely don't disagree with anything you say, just don't think the characterization of JS as a strictly imperative language is fair -- especially when compared with C.


Yup. So many good FP => JS langs too.


FP is different things to different people. For me, it's pure functions and a preference for purely functional data structures (Okasaki style).

Also to me, FP has an emphasis on avoiding mutating state. The second example on the React front page shows how to mutate state, and then they just build from there. I don't use React, but looking at those examples, it all looks very OO to me.


React with class-components is very OOP.

Hooks are closer to functional programming, though (but were released only a couple months ago)


If we are talking about languages of certain paradigms imposing discipline, then JS, not being a language conceived or further developed with FP in mind, is not one of those, regardless of how folks use it.


I think you're confusing procedural with functional. Modern Javascript has some support for functional idioms, but C is as far from functional programming as you get (nothing wrong with that, of course)


The key part of being a good programmer is knowing what you don't know


Agreed. I remember thinking "what don't I get? Why do we need getters and setters?". After some years (and discovering Python), I realized there's nothing to get, it's just ridiculous overengineering 95% of the time. Same goes for a lot of stuff in OO. I attribute it to the corporate mindset it seems to thrive in, but I could be wrong.


The important thing is restricting your public interface, hiding implementation details, and thinking about how easy your code (and code that uses it) will be to change later. It's not an OO vs anything thing.

When you want a value from a module/object/function/whatever, whether or not it's fetched from a location in memory is an implementation detail. Java and co provide a short syntax for exposing that implementation detail. Python doesn't: o.x does not necessarily mean accessing an x slot, and you aren't locking yourself into any implementation by exposing that interface as the way to get that value. It's more complicated than Java or whatever, here, but it hides that complexity behind a nice syntax that encourages you to do the right thing.

Some languages provide short syntax for something you shouldn't do and make you write things by hand that could be easily generated in the common case. Reducing coupling is still a good idea.


> The important thing is restricting your public interface

That is the important thing sometimes. At other times the important thing is to provide a flexible, fluent public interface that can be used in ways you didn't intend.

It really depends on what you're building and what properties of a codebase are most valuable to you. Encapsulation always comes at a cost. The current swing back towards strong typing and "bondage and discipline" languages tends to forget this in favour of its benefits.


> At other times the important thing is to provide a flexible, fluent public interface that can be used in ways you didn't intend.

That scares me. How do you maintain and extend software used in ways you didn't intend?

Quality assurance should be challenging.


> That scares me.

It scares you because you're making some assumptions:

1. You assume that I'm writing software that I expect to use for a long period of time.

2. Even if I plan to use my software for an extended period of time, you're assuming that I want future updates from you.

Let me give you an example of my present experience where neither of these things are true. I'm writing some code to create visual effects based on an API provided by a 3rd party. Potentially - once I render the effects (or for interactive applications - once I create a build) my software has done its job. Even if I want to archive the code for future reuse - I can pin it to a specific version of the API. I don't care if future changes cause breakage.

And going even further - if neither of these conditions apply the worst that happens is that I have to update my code. That's a much less onerous outcome than "I couldn't do what I wanted in the first place because the API had the smallest possible surface area".

I'll happily trade future breakage in return for power and flexibility right now.


Maybe instead of "restrict" it would be better to say "be cognizant of." If you want to expose a get/set interface, that's fine, but doing it with a public property in Java additionally says "and it's stored in this slot, and it always will be, and it will never do anything else, ever." I don't see what value that gives in making easy changes for anyone. I don't see why that additional declaration should be the default in a language.

You get into the same issue with eg making your interface be that you return an array, instead of a higher-level sequence abstraction like "something that responds to #each". By keeping a minimal interface that clearly expresses your intent, you can easily hook into modules specialised on providing functionality around that intent, and get power and flexibility right now in a way that doesn't hamstring you later. Other code can use that broad interface with your minimal implementation. Think about what you actually mean by the code you write, and try to be aware when you write code that says more than that.

I think it's interesting that you associate that interface-conscious viewpoint with bondage and discipline languages. I mostly think of it in terms of Lisp and Python and languages like that where interfaces are mostly conceptual and access control is mostly conventional. If anything, I think stricter type systems let you be more lax with exposing implementations. In a highly dynamic language, you don't have that guard rail protecting you from changing implementations falling out of sync with interfaces they used to provide, so writing good interfaces and being aware of what implementation details you're exposing becomes even more crucial to writing maintainable code, even if you don't have clients you care about breaking.

Of course all this stuff goes out the window if you're planning to ditch the codebase in a week.


I don’t think I’ve ever seen a useful “Getter” abstraction...


  getArea() {
    return this.width * this.height;  
  }

  getIcon() {
    // if icon hasn't been loaded, load it
    return this.icon;
  }


Those aren’t abstractions... Also, I’m not arguing that you can’t contrive an abstraction around a getter, I’m arguing that it’s useful to do so (so please spare me contrived examples!).


You're always using a getter. It's just a question of what syntax your language provides for different ways of getting values, and how much they say about your implementation.

Most people don't have a problem with getters and setters, they have a problem with writing pure boilerplate by hand. Languages like Python and Lisp save you from the boilerplate and don't provide a nicer syntax for the implementation-exposing way, so people don't generally complain about getters and setters in those languages, only in Java and C++ and things.


You misunderstood my post. I said I haven’t seen a useful getter abstraction. Not all data access is via a method nor is it always abstract.

I specifically object to the useless abstraction, not the boilerplate (boilerplate is cheap).


I think we're coming at it from different angles. My point is that there shouldn't be any abstraction to write, and it should just be the way the language works. Primitive slot access in Java is not just a get/set interface, it's a get/set interface that also specifies implementation characteristics and what the code will be capable of in the future. It should be in the language so that you can have primitive slots, but it shouldn't be part of the interface you expose for your own modules, because adding pointless coupling to your code does nothing but restrict future changes. Languages should not provide an easy shortcut for writing interfaces like that.

I don't view it as a useless abstraction, because I view it as the natural way of things. I view specifying that your get/set implementation is and always will be implemented as slot access to be uselessly sharing implementation details that does nothing but freeze your current implementation strategy.

I think a better question is when that abstraction gets in your way. When does it bother you that nullary functions aren't reading memory locations? Why do you feel that's an essential thing to specify in your public interface, as a default? There's nothing stopping you from writing code in Python and mentally modelling o.x as slot access, because it follows the interface you want from it.

If you only care because it's something extra you have to do, then that's what I meant by boilerplate. I think it's a misfeature of Java's that it presents a model where that's something extra you have to do.


> My point is that there shouldn't be any abstraction to write, and it should just be the way the language works.

I understand your point, but I think you misunderstand what "abstraction" means. "abstraction" doesn't mean "function" (although functions are frequently used to build abstractions), and if you have "dynamic properties" (or whatever you'd like to call them) a la Python, then you're still abstracting. My point is that abstracting over property access (regardless of property-vs-function syntax) is not useful, or rather, I'm skeptical that it's useful.

> I think a better question is when that abstraction gets in your way. When does it bother you that nullary functions aren't reading memory locations? Why do you feel that's an essential thing to specify in your public interface, as a default? There's nothing stopping you from writing code in Python and mentally modelling o.x as slot access, because it follows the interface you want from it.

I think this is a good question, because it illustrates a philosophical difference--if I understand your position correctly, you'd prefer to be as abstract as possible until it's problematic; I prefer to be as concrete as possible until abstraction is necessary. There's a lot of mathematical elegance in your position, and when I'm programming for fun I sometimes try to be maximally abstract; however, when I'm building something and _working with people_, experience and conventional wisdom tells me that I should be as concrete and flat-footed as possible (needless abstraction only makes it harder to understand).

To answer your question, that abstraction gets in your way all the time. The performance difference between a memory access (especially a cache-hit) and an HTTP request is several orders of magnitude. If you're doing that property access in a tight loop, you're wasting time on human-perceivable timescales. While you can "just be aware that any given property access could incur a network call", that really sucks for developers, and I see them miss this all the time (I work in a Python shop). We moved away from this kind of "smart object" pattern in our latest product, and I think everyone would agree that our code is much cleaner as a result (obviously this is subjective).

TL;DR: It's useful to have semantics for "this is a memory access", but that's unrelated to my original point :)


It's frustrating to read this thread and your comment kind of crystallized this for me so I'll respond to you.

Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO. This is a getter that you almost certainly use constantly.

Please try to consider your statements and potential counter factuals before spraying nonsense into the void


> Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO.

Er, aside from C and ASM, few non-OO languages require that kind of manual effort. That's not a triumph of OO, it's a triumph of using just about any language that has an approach to memory management above the level of assembly.


> Please try to consider your statements and potential counter factuals before spraying nonsense into the void

My claim was that getter abstractions as described by the GP (abstracting over the “accessed from memory” implementation detail) are not useful. Why do you imagine that your array length example is a reasonable rebuttal?


It's not the length of the array. It's using things like array[20]. Yes, that exists pre-OO and outside of OO, but it's the foundational aspect of OO and one of the strongest use cases.

Sorry for the way I communicated- I was tired and should have reconsidered.


> Sorry for the way I communicated- I was tired and should have reconsidered.

No worries, it happens. :)

> It's not the length of the array. It's using things like array[20]. Yes, that exists pre-OO and outside of OO, but it's the foundational aspect of OO and one of the strongest use cases.

I'm not sure what you're getting at then. Indexing into an array? Are you making a more general point than arrays? I'm not following at all, I'm afraid.


I think my argument is basically that arrays are effectively object oriented abstractions in most languages.

You aren't responsible for maintaining any of the internal details, it just works like you want it to. My example was with the getter for the item at index 21 (since you had specifically called out useless getters), but equally well applies to inserting, deleting, capacity changes, etc.


> I think my argument is basically that arrays are effectively object oriented abstractions in most languages.

I think I see what you mean, although I think it's worth being precise here--arrays can be operated on via functions/methods. This isn't special to OO; you can do the same in C (the reason it's tedious in C is that it lacks generics, not because it lacks some OO feature) or Go or Rust or lisp.

These functions aren't even abstractions, but rather they're concrete implementations; however, they can implement abstractions as evidenced by Java's `ArrayList<T> implements List<T>`.

And to the extent that an abstract container item access is a "getter", you're right that it's a useful abstraction; however, I don't think that's what most people think of when they think of "getter" and it falls outside the intended scope of my original claim.


Watch your tone!


> Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO.

I've used arrays in countless OO and non-OO programming languages, and I do not recall ever having to manually calculate the size of objects contained therein – what are you talking about? Only C requires crap like that, but precisely because it doesn't have first class arrays.


Downvoters, care to elaborate what you think is wrong with the above? Literally even Fortran can do better than

   size_t len_a = sizeof(a)/sizeof(a[0]);
or

   my_pseudo_foo_array = (foo*) malloc(len * sizeof(foo));


You're not wrong. Even BASIC was better than this.


HTTP GET :-)


> Python doesn't: o.x does not necessarily mean accessing an x slot

C# also 'fixes' that. o.x could be a slot or it could be a getter/setter.


Initially seen in languages like Eiffel and Delphi.


I have basically no experience with Java. But in C# I think the above is whats behind stuff like

fooobj.events += my_eventhandler;


It is, but those languages did it about 6 years before C# came into existence.

Which isn't surprising, given that Delphi took the idea from Eiffel (both share the same Pascal influence) and was designed by Anders.


Not to mention Anders and C#.


I get that "what don't I get?" feeling all the time. Overengineering is basically an epidemic at this point, at least in the JS/front-end industry.

My guess is there's a correlation between overengineering and career success, which drives it. Simple, 'KISS' style code is the easiest to work with, but usually involves ditching less essential libraries and sticking more to standards, which looks crap on your resume. Most interviewers are more interested in whether you can piece together whatever stack they're using rather than whether you can implement a tough bit of logic and leave great documentation for it; so from a career perspective there's zero reason for me to go for a (relatively) simple 100 line solution to a tough problem when I can instead go for a library that solves 100 different use cases and has 10k lines of documentation that future devs have to wade through. The former might be the 'best' solution for maintainability but the latter will make me appear to be a better engineer, especially to non-technical people, of which there are far too many on the average team.


Thanks for that. That resonates a lot with me. It makes me feel better realizing that I'm not alone in thinking that.

Recent writings by Joe Armstrong also resonate with me the same way.


Well, it depends on what you are doing. I designed some systems that were too complex and some that were too simple and couldn't grow as a result. So, with experience, one will hopefully see that supposed overengineering is sometimes only overengineering until you actually need that specific flexibility in a growing system. And there is little substitute for experience to know which is which.


In the original JavaBeans spec, getters and setters served two purposes:

1. By declaring a getter without a setter, you could make a field read-only.

2. A setter could trigger other side effects. Specifically, the JavaBeans spec allowed for an arbitrary number of listeners to register callbacks that trigger whenever a value gets changed.

Of course, nobody actually understood or correctly implemented all this, and it all got cargo culted to hell.
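For reference, both points from the spec fit in a tiny bean. This is a minimal sketch using the standard `java.beans.PropertyChangeSupport` helper; `Person`, `id` and `name` are made-up names for illustration.

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

class Person {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private final String id;   // point 1: getter but no setter => read-only
    private String name;

    Person(String id) { this.id = id; }

    public String getId() { return id; }

    public String getName() { return name; }

    // Point 2: the setter has a side effect — it notifies registered listeners.
    public void setName(String newName) {
        String old = this.name;
        this.name = newName;
        pcs.firePropertyChange("name", old, newName);
    }

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }
}
```

Callers can observe changes without the bean knowing who is listening, which is the part that usually gets lost when getters/setters are cargo culted.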


Finally someone mentions using getters to create read-only fields. Objects are the owners and guardians of their own state. I don't see how this is possible without having (some) state-related fields that can only be read from the outside.


Pretty obvious to readers of "Object-Oriented Software Construction" from Meyer.

A big problem is cargo culting without reading the CS references.


IME the thing with getters and setters is that everyone is doing it (inertia) and that other options either suck (syntactically) or break the "everything is a class" constraint.

Ruby is far from being my favorite language, but I like how Structs "solve" the getter/setter problem in it:

    MyStruct = Struct.new(:field_one, :field_two)
It doesn't clutter your code with multiple lines of boilerplate, and it returns a regular class for you to use, not breaking the "everything is a class" constraint.


In my opinion OO design is valuable in extremely large code bases and/or code bases that will likely exist for decades and go through multiple generations of significant refactoring.

With respect to your setters and getters question, particularly in regards to Python... The `@property` feature in Python is just a specific implementation of the setters/getters OO design principle. I can easily be convinced typing foo.x is better than foo.getX(), but I have a hard time having a strong emotional reaction to one vs the other if the language allows them to have the same benefits.


I feel like what happened to agile development happened to OOP, people morphed it into something that it was never meant to be.


Yeah, somewhere it stopped being about modeling your problem and it became a code organization technique. There was an incredible effort to formalize different modeling techniques/languages but it’s dried up.

It seems to be what we do; I'd say FP is in the same place. My CS program was heavily built around the ML family of languages, specifically Standard ML, with the algebraic types, functions, pattern matching (on your types), etc. That kind of "functional programming" seems like a radically different thing from what people do in JS or Erlang and call by the same name. It all comes around, I guess: static types were pretty gauche 10-15 years back, and now how many folks are using TypeScript to make their JS better?


Evolutionary design is that way in general really. Your intentions never matter - just what it can be used for.


Either you tell your objects what to do, which means they have mutable state, which means you are programming in an imperative way.

Or you get values from your objects. You need getters for this, but you can guarantee immutability and apply functional programming principles to your code.

You can't have your cake and eat it too. At the end of the day, you need values.
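The second style can be sketched in Java (assuming Java 16+ for records; `Point` is a made-up example): instead of a setter that mutates, a "with" method returns a fresh value, so callers only ever get values out.

```java
// Immutable value type: the "setter" returns a new Point instead of mutating.
record Point(int x, int y) {
    Point withX(int newX) { return new Point(newX, y); }
}
```

Telling such an object what to do is impossible; all you can do is derive new values from it, which is exactly the functional half of the dichotomy.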


It's harder to write simple code because that requires a crystallized understanding of the problem. You can start banging out FactoryManagerFactories without having the faintest idea of the core problem at hand. Maybe some of the silliest OO patterns are like finger warmup for coders? Unfortunately that stuff still ends up sticking to the codebase.


Getters and setters make more sense in languages where you can't override attribute lookup.


That sounds wise but it doesn't really mean anything. There are things that suck about a Ford Pinto and there are things that suck about a Tesla Model S, but saying that they both have their downsides is technically true while obscuring the fact that the Tesla is a muuuuuuuuuuuuuuch better car.


Good choice of example. Both the Pinto and the Tesla, unlike most cars, kill(ed) their owners in surprising ways.


Is there a name for this particular argumentative fallacy?


False equivalence: being similar in one respect doesn't erase the other differences. Saying that both a bicycle and a truck can move things on roads, and that either can kill you if you're run over by one, is technically true but misses many other, larger differences.


OO is the worst programming paradigm in the world except for all the others.


Admittedly I'm somewhat of a FP fanboy, but I seriously cannot disagree with you more on this.

Functional Programming (and Logic Programming) are better than other paradigms because, unlike Java (or C++, or C#...) there is an emphasis on correctness, and the people working on FP compilers (like Haskell and Idris) are utilizing mathematics to do this.

No idea on your opinion on mathematics, but to me Math/Logic reign supreme; the more mathematically-bound your program is, the less likely it is to do something you don't want later.

Compare this to Java. It's 2019, and we're still doing `if (x==null) return null` all over the place (I'm aware that `Optional` exists, but that doesn't really help when it's not enforced and none of my coworkers use it). How about having to create six different files for something that I could have written in 20 lines of Haskell? Or how about the fact that the type system mostly exists to help with optimizations, and, lacking support for structural typing, it can't be useful for much else.
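To be fair, the `Optional` approach does clean up those chains when it's actually used. A minimal sketch (the `firstUpper` helper is made up):

```java
import java.util.Optional;

class NullFree {
    // First letter, uppercased, with a fallback — no `if (s == null)` chain.
    static String firstUpper(String s) {
        return Optional.ofNullable(s)
                .filter(v -> !v.isEmpty())
                .map(v -> v.substring(0, 1).toUpperCase())
                .orElse("?");
    }
}
```

Of course, nothing in the language forces callers to go through this instead of passing raw nulls around, which is the complaint above.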

I realize that I'm picking on Java, but Java is the biggest target when it comes to OOP as the industry understands it. I personally cannot stand having to create fifty files to do something like a database wrapper, and in Java that's effectively the only way to program.


> I realize that I'm picking on Java, but Java is the biggest target when it comes to OOP as the industry understands it. I personally cannot stand having to create fifty files to do something like a database wrapper, and in Java that's effectively the only way to program.

I had this experience once in a Rails shop.

A simple database table mapped to a CRUD API endpoint would take from five to ten files. That amounted to about 500 lines, plus a lot of tests for each class.

I never really understood why programming became so verbose. In an ideal world I'd have a declarative API that mapped the table to the API for me automatically. In a realistic timeline I'd just use the traditional Rails approach and be happy. But the people working there preferred to use complicated patterns and a lot of boilerplate before they were needed, even though the project was perpetually late and riddled with bugs. I wish we could give a chance to simpler ways of solving problems.


Yeah, I had similar issues with Rails as well; I feel people can be a bit too liberal with creating files, but I personally follow this mantra when I do it: is the benefit of separation worth the obfuscation introduced by adding a partition? Sometimes it is, and then I make a new file.

This is a bit of shameless self-promotion, but I've actually written a framework that's MVC-ish that lets you create really declarative APIs. The first version is written in NodeJS and I actually deployed it in production [1], and I have an Erlang port that's semi-complete that I've recently started hacking on again [2]; the whole crux of it is that you should be able to simply declare the composition of your actions.

[1] https://gitlab.com/tombert/frameworkeyPromiseEdition [2] https://gitlab.com/tombert/Frameworkey-Erlang


We are currently in a swing back in favour of statically typed languages. People seem to have forgotten why we previously had a huge trend towards more expressive, less strict, dynamically typed languages.

Maybe we learn something each time the pendulum swings, but as someone knee-deep in C# at the moment, the quality of the APIs I have to deal with is far below those I was used to in Python (at least in terms of elegance and usability).

I'm not sure whether these flaws are inherent or whether it's possible to have one's cake and eat it.


I don't think it has anything to do with dynamic or static typing, but more to do with teams, libraries and program design.

I've had terrible experiences with complexity and verbosity in Ruby and Python codebases, which are dynamically typed. On the other hand, I've worked with super expressive and simple-to-work-with codebases in C# and Haskell. And I've had the opposite experience as well at other times.

It is absolutely possible to have the cake and eat it in this regard.

In fact I'd consider Haskell way more expressive than any dynamic language I ever worked with.


I love Haskell, and I agree that it's expressive, but any language with a nominal type system like Haskell is inherently going to be less expressive than a dynamic language.

Compare these functions, one in JS and one in Haskell:

    function f(obj) {
        var first = obj.x;
        var second = obj.y;
        return first + second;
    }
vs.

    f :: (HasX a, HasY a) => a -> Int
    f foo = x foo + y foo

(I'm a little outta practice with both languages, but my point still stands)

With the JS version, `f` can take in any object that has `x` and `y` properties, while with the Haskell version, the type has to implement the typeclasses `HasX` and `HasY`. While the Haskell version is still better than something like Java, because you can implement a typeclass without modifying the core datatype, it's still inherently less expressive.

I'm not saying that it's not worth it (cuz Haskell is awesome for everything but records), but it's still less immediately reusable.


Step 1 is to use a static analyzer. Enforce null checks and finals and such. I think creating a lot of files only hurts up front, but I will give up more keystrokes in favor of unequivocal stack traces any day. Also javadoc is the best.


> none of my coworkers use it

This sounds like the root of your problem. If you want to do FP, and none of your coworkers want to do FP, then your problem isn't really the language.

If you're in a Java shop, maybe start by evangelizing FP rather than a totally different language/platform? It's possible to do FP in Java, and (IMO) it ends up pretty reasonable. But it's not the default habit for J Random Java Programmer, so they need training.
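Agreed; with lambdas and streams, FP-ish Java is perfectly workable. A small sketch of what that evangelism might look like in practice (the function name is made up):

```java
import java.util.List;

class Streams {
    // A pure, declarative pipeline: no loops, no mutation, no nulls.
    static int sumOfSquaresOfEvens(List<Integer> xs) {
        return xs.stream()
                .filter(n -> n % 2 == 0)
                .map(n -> n * n)
                .reduce(0, Integer::sum);
    }
}
```

It's not Haskell, but it's a long way from the `for`-loop-and-mutable-accumulator default, and it's teachable to a team one code review at a time.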


Sure, but as John Carmack has said, if the compiler allows something, it will end up in the codebase once it gets sufficiently large.

I work for a brand-name big company (I won't mention it here but I'll tell you if you email me) that hires incredibly talented engineers that are a lot smarter than me. The codebase I work on is around ~20 million lines of Java, and I've seen stuff in there that is so incredibly gross that a compsci professor would write "see me after class" if you submitted it.

Example: I once saw a piece of code doing this:

    do {
        // doing stuff
    } while (false);
It took me about 10 minutes of digging into the code to realize that the person who wrote this was doing this so that they could add a `break` in there as sort of a makeshift `goto` so they could early-exit and skip all the rest of the stuff in the block. Needless to say, I was horrified.

Why is it that incredibly talented engineers are writing awful code like that? It's certainly not incompetence; what almost certainly happened was that there was some kind of time-crunch, and the dev (understandably) felt the need to cheat. This is a direct consequence of the compiler allowing a bad design.
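For reference, the idiom described above looks something like this (a made-up sketch, with the early exit made explicit):

```java
class EarlyExit {
    static int run(boolean bail) {
        int reached = 0;
        do {
            reached = 1;
            if (bail) break;   // `break` acts as a forward goto past the rest
            reached = 2;
        } while (false);       // the block executes exactly once either way
        return reached;
    }
}
```

Whether that is a horror or a harmless idiom is exactly what the replies below argue about.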


That loop is totally fine. I wrote it many times in C#, C and C++, and I saw it many more times in other people’s code.

Microsoft DDK sample: https://github.com/Microsoft/Windows-driver-samples/blob/mas...

Microsoft MediaFoundation sample: https://github.com/Microsoft/Windows-classic-samples/blob/ma...

OpenSSL: https://github.com/openssl/openssl/blob/master/crypto/sha/sh...

And many others.

Just because code looks unfamiliar doesn’t mean it’s something wrong with the code.


Functional programming has no relevance to the correctness of code.

You'll notice that it's virtually unheard of to use any FP languages in critical software. Instead they use languages that lend themselves well to code reviews, static and dynamic analysis, model-based design and proofs, etc. Like C, Ada and some domain-specific stuff.

The kind of "correctness in the small" offered by Haskell through its type system can also be obtained in languages like C++, Swift and others. With the additional benefit of massive market share, teaching resources, mature tooling and so on.


>You'll notice that it's virtually unheard of to use any FP languages in critical software.

Erlang powers around 40% of the world's phone networks, and if that's not mission-critical I'm not entirely sure what is.

For that matter, Whatsapp is also written in Erlang and Jane Street does trading applications in OCaml. Without making a judgement on whether or not they should, both Whatsapp and Jane Street create very large apps and have created successful businesses with FP.

> Instead they use languages that lend themselves well to code reviews, static and dynamic analysis, model-based design and proofs, etc

I can't tell if you're being serious; are you suggesting that Functional Programming doesn't lend itself to proofs? Really? Have you ever heard of Coq or Idris or Agda? They literally have modes to prove the correctness of your code.

What about functional programming doesn't lend itself to code reviews? I did F# for a living for two years and we had regular code reviews. I also used the .NET performance profiling tools which worked fine for F#.

> The kind of "correctness in the small" offered by Haskell through its type system can also be obtained in languages like C++, Swift and others.

Uh, no. Sorry, that's just flatly wrong.

Yes, static analysis tools are awesome, but you will never get the same level of compile-time safety from C++ that you will from Haskell or Rust or any number of functional languages. C++'s type system carries very little information, so there's not much the compiler can shield you from.


TLA+, SPARK, Frama-C, PROMELA, Astree, even plain C or C++ and a heavily safety-oriented process are used when correctness is important more than any of the FP languages you've mentioned.

In fact, by mentioning Erlang and Ericsson, you exhausted the only case supporting your point. Maybe if you tried hard, you could come up with a couple more. Now let's do the same exercise for the languages and tools I enumerated and it will take a long time until one runs out of examples.

WhatsApp is another perennial example in these discussions. I can accept it although there's nothing critical about a chat app - and once again it's rather an exception instead of the rule. Most chat applications are written in "not FP" programming languages and work just as reliably as WhatsApp.

In case it's not yet clear from the above, I believe that only tools which are used heavily in the industry deserve our attention, not obscure languages which haven't been put to the test and one off projects. The oldest trick in the FP argument book is finding some minor FP language to match any requirements put together by critics. So yes, I've heard of Idris and Agda - on HN - because barely anyone else uses them or talks about them. Coq is perhaps the outlier, because it was used to verify CompCert, but then again CompCert itself is used to implement a lot more things.

But Coq, Idris, Agda and so on are actually red herrings, because when people praise FP's correctness benefits, they refer to standard languages like Haskell, F# or OCaml for which there is in fact little proof that they have a significant effect on program correctness. Obsessively encoding information in the type system will reduce or eliminate some types of errors, but that's far from proving a program correct and really not that far at all from what's available in other standard, mainstream languages, for less effort, better support and a great ecosystem.


I don’t know anything about SPARK or Frama-C, but TLA+ isn’t a programming language, and you can use it to model distributed functional apps just fine (I still do).

Even if the Erlang/Ericsson stuff is the “only case” (it’s not), I do not see how that makes my point less valid; Erlang was specifically designed for systems that cannot fail. Telephones are just a good example of that.


And doesn't your TLA+ model make your functional code significantly more reliable? Guess what, it does the same thing for OO languages => no need to use FP to increase reliability.

Same goes for the other tools or languages I mentioned.


Having proper support for option or sum types is orthogonal to whether a language is object oriented or not. Crystal is an OO language that has sum types, for example (and yes, nil is separated from other types, so a method returning a Duck will really do that; it won't return nil unless the signature were Duck | Nil).


> How about having to create six different files

Took me a few reads but it's better stated "six different classes". At first I was confused about why you rely on `java.io.File` for business logic.

So, if I'm stuck on JVM, what's my FP alternative that compiles and runs comparatively? Clojure? Scala?


Clojure.

If I'm stuck in JVM-land, Clojure followed by straight, modern Java would be my choices.


Scala. Particularly Scala 3.

Because it is principled but also very pragmatic.


Scala failed because it's the opposite of pragmatic. If you're looking for pragmatic, take a look at Kotlin.

As for Scala 3, it's still years away, if it ever comes out. And when it does, there's little reason to think its goals will be different from what Scala 2 was (an academic language) since it's the same team as Scala 2 writing it.


In what way is Scala not pragmatic?


It's a language that's aimed more at research, producing papers for conferences, and financing the EPFL and its PhD students than at users in the real world.

There's absolutely nothing wrong with that, by the way, I love studying all the advanced concepts that Scala has pioneered over the years.

But it's also the reason why it's largely in decline and why Kotlin has taken the industrial world by storm: because it is a pragmatic language.


I've used Scala in production environments, and we never had any problems with it being too academic. SBT sucks, but that's another issue.

Kotlin doesn't have typeclasses (something you get as a side effect of Scala implicits), ADTs, or true pattern matching (along with exhaustivity checks). In combination, all of those allow for expressive, easy-to-read code that, in my experience, tends to have few bugs. Kotlin is a step backwards from that. It's still a significant step up from Java, however.

F# is the only other language I've used that I've found comes close. However it lacks typeclasses, and the large Java ecosystem.


A step backward to you is a step forward in pragmatism for the rest of the world.

I understand the value of higher kinds and I'm comfortable with Haskell, but it's pretty obvious to me why Kotlin is succeeding where Scala failed.

Sometimes, improvements in programming languages are reached by having fewer features, but Scala is a kitchen sink that was always unable to turn down features, just because their implementation would lead to more research papers to submit to conferences.

As a result, we ended up with a monster language that contains every single feature ever invented under the sun.


AFAIK Scala is much more popular than Kotlin in terms of job postings and projects in big enterprises. My data, limited to some Fortune 100 companies, tells me it is on par with Python in popularity. Spark, Kafka, Flink and Finagle are written mostly in Scala. Pretty impressive for an academic, non-pragmatic language that has failed, isn't it?

So can you elaborate on what you mean by "failed"? Because it seems you are using a different definition of it.

"Sometimes, improvements in programming languages are reached by having fewer features, but Scala is a kitchen sink that was always unable to turn down features, just because their implementation would lead to more research papers to submit to conferences."

That's some different language you're talking about. Scala is built on a small set of very powerful, general, orthogonal features which cooperate nicely and allow you to build most of the stuff as libraries. Its design is much more principled than Kotlin's. Kotlin has special features built into the language that Scala needs just a library for.


Just a note; while there aren’t typeclasses in F#, you can use statically resolved type parameters with a member constraint, getting you pretty close.


Kotlin is just Scala with a few of its most advanced features taken out and no good replacement. It is not even really much faster in compilation speed when you account for its verbosity [1], and it has worse IDE support, limited to just one IDE. JetBrains is not interested in supporting IDEs other than its own. So what is so much more pragmatic about it?

Also there is no decline in Scala usage, and Kotlin doesn't really exist outside its Android niche. So "taking the world by storm" is wishful thinking.

[1] https://stackoverflow.com/questions/34615947/why-does-kotlin...


Yes, clojure


OOP has evolved. If you want to get a clearer idea of where it's at, look at Kotlin instead of a twenty-five year old language.


Sure, Scala has some neat features too (though I'm not 100% sold on the language).

That said, and I addressed this specifically, when I say "OOP" in the software world, people typically think of Java, C++, or C#, and those are what I'm addressing specifically.

I suppose in the most technical sense of the word, you could argue that Erlang is OOP at some level, and Erlang is awesome, so if we want to play with definitions then sure, I'll concede that OOP is good, but until the industry as a whole agrees on these terms, and doesn't treat OOP as a synonym for "Java/C++/C#", I'm still going to say that I hate OOP.


You could be doing instead

    (if (null? x)
      ;.....
      )


If this were clojure, I'd probably use a (some->...) or (some->> ...) macro, so it would be a non-issue.


Shouldn't that be (if (nil? x))?


Probably, it's been a while since I have done Lisp in anger.


I think this is pretty insightful.

It's easy to hate on OO because of something along the lines of it not being a neat mathematical formalism, which can facilely be argued as strictly a deficiency: if you don't look too closely, it certainly appears as only a deficiency.

I think a deeper look inevitably runs into two things:

(1) certain domains are more easily approached through spare mathematical formalisms than others. E.g. if the domain you're modeling is already most easily thought about in terms of compositions of mathematical transformations, you should probably model it functionally.

(2) Finding a declarative characterization of the results you'd like, or a neat chain of functional compositions which produces it, typically takes more work up front. (For many projects, the initial work up front is worth it, but for lots and lots of others, it's essentially overengineering.)

OO is often not 'ideal,' but frequently, solidly pragmatic.

As a paradigm, the aesthetic behind it reminds me of TypeScript's designers intentionally foregoing soundness of the type system.


This is a decently clever Churchillian quip. At the very least, it does not deserve downvotes.


OO languages work effectively in spite of OO features. Sounds like a hot take, but throw away inheritance altogether (or use it to automatically delegate to some component, like a dodgier version of struct embedding in Go), use interfaces if the language doesn’t support first class functions, etc and you’ll be effective, which is to say, write it like you would write Go or Rust or similar.


> I still think OO provides a pretty easy mental framework for programming. You can get good results.

The problem is that OOP is a slate of something like 18 characteristics and no language ever picks the same choices.

That having been said, the big problem with (especially early) OOP is that "Is"/"IsA" (aka structural inheritance) is the primary abstraction. Unfortunately, "Is"/"IsA" is a particularly lousy choice--practically anything ("Contains" or "Accesses" or ...) is better.

Most of the modern languages designed in the past 10-20 years reflect this--"Traits"/"Interfaces" seems to be what everybody has settled around.


I think OO can work because in many problems we only focus on one thing at a time. If multiple objects with equal complexity/importance are involved, OO can get sticky (e.g. which object should invoke a method, etc). I think Joe's article is intentionally provocative to make a point, but I'd like to see more discussions about when and why OO doesn't work well sometimes and what the course of actions we should take.


I've upvoted you because you spotlighted a very important issue. In OOP we are supposed to think of a program as little pseudo-isolated programs that somehow work together to fulfill the technical requirements.

This model works where it actually represents the real world: Mostly, in distributed systems.

In other areas, it just leads to overengineered piecemeal crap that is incredibly hard to understand. Where you can get control over what happens, you absolutely should get it. Don't act like your program is a thousand little independent components that have their own minds and lives. Because it isn't like that, and if it were, there would be no way you could actually get them under control to make them produce a very specific outcome.

So the only reason why many OOP programs sort of work is that programmers never actually respect the abstractions they set up by defining so many classes and methods. To get the program to work, one needs to know very precisely what each class does in each case. In the end OOP is just a terrible farce, since there is no rhyme or reason to all these classes. It's needless bureaucracy, and it prevents us from structuring programs in a more efficient and maintainable way.


The basic situation is this. We often have a situation in which N operations contain M cases (for M different types).

Without OOP, we have the ugly organization of writing N functions that each dispatch M cases of code by pattern matching or switching on a numeric type field or whatever.

OOP lets us break these pieces of logic into separate methods. And then in the physical organization of the program, we can group those methods by type. For any given type, we have N methods: each one implements one of those N functions just for that type.

This is a better organization because if we change some aspect of a type, all the changes are done in one place: the implementation file or section of file for that type. Or if a new type is added, we just add N methods in a new file or section; we don't have to change the code of numerous functions to introduce new cases into them.

Those who write articles opposing OOP never seem to constructively propose an attractive alternative for this situation.

It is this attractive program organization which swayed developers toward OOP, or even full blown OOP evangelism. It's not because it was hyped. OOP has concrete benefits that are readily demonstrable and applicable.

OOP is what allows your operating system to support different kinds of file systems, network adapters, network protocols, I/O devices and so on.

It's unimaginable that the read() system call in your kernel would contain a giant switch() on device type leading to device-specific code, which has to be maintained each time a new driver is added.
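The N-operations-times-M-types organization described above can be sketched with a hypothetical `Shape` hierarchy (N = 2 operations, M = 2 types): dispatch is per-type method lookup rather than a switch on a type tag, and adding a new type is just adding a new class.

```java
interface Shape {
    double area();        // operation 1
    String describe();    // operation 2
}

// All of Circle's implementations live in one place.
class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
    public String describe() { return "circle of radius " + r; }
}

// Adding Square touches no existing code — no switch statements to update.
class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
    public String describe() { return "square of side " + side; }
}
```

The trade-off, as the reply below notes, is that adding a new operation now means touching every one of the M classes.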


Well yeah, that's the expression problem[0].

With OOP I can add a new datatype easily, but when I want to add a new behavior I now need to go to M different places. With a functional style I only need to touch one. You're open on types but closed over behaviors. Functional styles are the opposite.

In some sense, I would even go as far as saying the idealized 'UNIX philosophy' is a degenerate example of this. We have a very limited set of types (the file) and a bunch of independently implemented behaviors. Imagine implementing sed or grep on a per-file (or per filesystem) basis.

These both get really interesting when you consider libraries/user extensibility, since unrelated actors could now add either new types or new methods. Most languages just punt on this by banning one or the other.

A pattern matching style would allow me to add a new sql() system call to query into the filesystem. Look at how much trouble there is adding new features to CSS, TCP, Java, etc trying to coordinate among so many different actors.

Or consider the case of a programming language AST. I can make a pretty printer, an interpreter, an optimizer, a type checker, a distributed program runner. But trying to do that with an OOP style is much harder for a large AST.

At the end of the day, we have NxM (type, behavior) pairs and there are pros/cons to each way of slicing them.

[0] https://en.wikipedia.org/wiki/Expression_problem


That's why you need typeclasses like in Haskell or Scala. And then you can be open on behaviors and types at the same time.


> I still think OO provides a pretty easy mental framework for programming

Very true. It's a practical solution to a complex problem. However, when systems get complex, it becomes very hard to find the right object / type to bottle up logic. Perhaps, then, a mix of OO and functional is the solution.


I'm really sick of these 'why blah sucks' posts. Clearly OOP works for a lot of people. If it doesn't work for you, don't use it. My personal feeling is that FP works better when the problem domain is more data oriented, requiring transformation of data streams, whereas OOP is good when the problem domain is about simulating or modeling, where you want to think about interacting agents of some kind. The whole 'X is the one true way' argument is narrow-sighted. I feel the problem should always precede the solution.


When I was a tutor (TA) at university (college) here in Aus, I marked assignments from my students. We used an automated test suite to check correctness. I went over each assignment to subjectively assess code style. I would open the first assignment which scored full marks with the test suite and find it was a clean 500-line-long implementation. Full marks. The next submission also got full marks, but it did it by spending only 200 lines. How? Was it overly terse? No... it looked clean and decent too. I would go back and look at the first submission and wonder - if you asked "could you throw away 60% of this codebase without sacrificing readability?" I would have answered of course not. But I would have been wrong. Silently, uncorrectably wrong if not for my unique opportunity.

In the programming work you do regularly, do you think there is a way you could structure your code which would allow you to do 60% less work, without sacrificing readability? If there is, would you know? The answer was clearly yes in the web world when jquery was king. Or cgi-bin on Apache. Is it still true today? Can we do better still?

If there is, it would probably demand a critical look at the way we approach our problems, and how we structure logic and data. The value in articles like this is to point at that question. For what it’s worth, I agree with Joe Armstrong and others who have been heavily critical of OO. Software which goes all-in on OO’s ideas of inheritance and encapsulation seems to consistently spend 2-3x as many lines of code to get anything done, compared with other approaches. (Looking at you, enterprise java.)

You’re right - the problem should precede the solution. But our tools are the medium through which we think of that solution. They matter deeply.


I think this is mostly a reflection of the thing Java/C#/C++ have popularized as “OOP”: if one uses Common Lisp’s CLOS, much of the boilerplate associated with “design patterns” and OO architecture evaporates.


Yes, absolutely. The article was written around 2000, when Java was the new sexy thing. When Joe talks about OOP being overhyped, he wasn’t talking about Rust’s traits or Common Lisp; he was speaking about the hype around Java and C++, and the then-lauded three pillars of OO: encapsulation, inheritance and polymorphism.

Not all OO works that way. In retrospect, inheritance was probably a mistake. And as far as I can tell, modern “OO-lite” coding styles focussing on composition over inheritance work pretty well. Alan Kay: “When I invented object oriented programming, C++ was not what I had in mind.”
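To make the contrast concrete, here is a minimal Python sketch of the composition-over-inheritance style; all class and method names are hypothetical, purely for illustration:

```python
# Inheriting logging behavior would couple it into every subclass.
# Composition instead injects the collaborator, so behavior can be
# swapped without touching a class hierarchy.

class ConsoleLogger:
    def log(self, msg: str) -> None:
        print(f"[log] {msg}")

class NullLogger:
    def log(self, msg: str) -> None:
        pass  # silently discard messages

class OrderProcessor:
    def __init__(self, logger):
        self.logger = logger  # composed, not inherited

    def process(self, order_id: int) -> str:
        self.logger.log(f"processing {order_id}")
        return f"order {order_id} done"

# Swapping behavior is a constructor argument, not a new subclass.
quiet = OrderProcessor(NullLogger())
print(quiet.process(42))  # → order 42 done
```

The point is that `OrderProcessor` needs no place in a logger class hierarchy; any object with a `log` method will do.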


Actually, I said: "I invented the term "object-oriented", and I didn't have C++ in mind".

In other comments here I explain why "object-oriented" was a too quick and bad choice for what I thought I was doing ...


I learned OOP in 1992, using Turbo Pascal 5.5, and got hold of Turbo Pascal 6.0 with Turbo Vision, shortly thereafter.

My follow-up OOP languages until 2000 were C++ (alongside OWL, VCL and MFC), Clipper 5.x, Eiffel, Sather, Modula-3, Oberon variants, Smalltalk, CLOS, SWI Prolog, Delphi and naturally Java.

In 1999 I got a signed copy of the ECOOP 1999 proceedings, full of alternative OOP approaches.

We should strive to actually provide proper bases to CS students, instead of market fads.


My inclination is to say that inheritance isn't the mistake, the mistake is making methods part of a class: my experience with inheritance in CL is that having generic functions/methods as their own first-class construct makes inheritance less of a minefield.
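Python's `functools.singledispatch` gives a rough, single-argument analogue of the CLOS idea that methods hang off generic functions rather than classes. A sketch, with made-up shape classes:

```python
from functools import singledispatch

# In CLOS, methods belong to generic functions, not to classes.
# singledispatch mimics this: 'area' exists independently of any
# class hierarchy, and implementations are registered per type.

class Circle:
    def __init__(self, r):
        self.r = r

class Square:
    def __init__(self, s):
        self.s = s

@singledispatch
def area(shape):
    raise TypeError(f"no area method for {type(shape).__name__}")

@area.register
def _(shape: Circle):
    return 3.14159 * shape.r ** 2

@area.register
def _(shape: Square):
    return shape.s ** 2

print(area(Square(3)))  # → 9
```

New types can register an `area` implementation without editing either the shape classes or the generic function, which is part of why inheritance hierarchies carry less weight in that style. (CLOS goes further with multiple dispatch; this is only the single-dispatch flavor.)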


Thanks, this is a great comment.


The post was "why OO sucks," not "why nobody should ever use OO." The distinction is important because everything sucks a little bit—especially in computer programming.

Understanding the objections to various programming paradigms can help improve how you use them, by having an awareness of what others consider potential minefields. (And who knows, maybe the arguments will change your mind. You shouldn't be so quick to prejudge the material.)


> I'm really sick of these 'why blah sucks' posts. Clearly OOP works for a lot of people. If it doesn't work for you, don't use it.

Most of us work in teams. If people I work with believe something that's not true, then it directly affects my work. If the majority of the profession believes something, it significantly affects my entire career. I don't actually hate OOP, although I have criticisms, but the attitude of "if you don't like it, go somewhere else" is missing the point. Criticism isn't meant to be mean or nasty; it's meant to point out bad thinking for the benefit of us all. What bothers ME is the positivity police on Hacker News who think that "if you don't have anything nice to say, don't say anything at all" applies to all of life.


> If it doesn't work for you, don't use it.

How many of us actually get a choice in this...?


I wonder if it would be constructive to modify the OP's statement a little bit. You can get there from here using any approach (as long as it is Turing complete ;-) ). Some approaches will work better than others, but optimising your approach first and convincing others second is putting the cart before the horse. Having a happy team that works well together is going to provide at least an order of magnitude more ROI than choosing the best approach. Compromising on your approach to make others happy will almost certainly pay off hugely. Get good at as many types of approaches as you can so that you can take advantage of those payoffs. The cult of "I must use the absolute best approach for this problem, everyone else be damned" is one that leads to misery IMHO (especially if it turns out that your "best approach" isn't, which happens most of the time in my experience ;-) ).


These days many languages are so expressive that you can establish a dominant paradigm in the part of the code you work on. We have a kind of micro-level choice that even the most authoritarian code reviews largely can't stamp out (provided that the interface you provide is in harmony with the rest of your organization).


Also, for how many does it really work?

I mean, most devs don't really know anything else, just accept it and try to get by...


Actually, this is not really about if OO sucks, but to reflect on one of the articles Joe Armstrong wrote. Joe passed away two days ago.


Please refrain from such self-centered, flippant dismissals. They tend to get in the way of potentially good discussions.

>>Clearly OOP works for a lot of people.

Something can work for a lot of people, and still suck.

>> If it doesn't work for you, don't use it.

Language and tool choices in our industry are made by a tiny minority. In fact, sometimes the people making those decisions are not even developers themselves! From that perspective alone, articles like this one are valuable.

Aside from that though, the question isn't whether OOP "works" or not. Rather, it is when it works, and for how long, until you run into a myriad of problems, such as leaky abstractions or inheritance hell. These are worth discussing. If you disagree, you can move on. No need to voice your misgivings.


Please refrain from telling others to refrain. The negatives are worth discussing, but not the positives? I want to hear everyone's thoughts, not only yours.

