Why OO Sucks by Joe Armstrong (2000) (cat-v.org)
209 points by crazypython on March 25, 2021 | 278 comments

This is kinda funny. I wrote a "rebuttal" article back in 2009:


...and then I met Joe not long after and we also discussed this in fact. He then told me that he felt my article was fair (!) and that he had *changed his opinion on OO since writing that article*.

IIRC his exact words were something along the lines of "I did not understand OO when I wrote that article". Now... he also argued Erlang is in fact VERY OO, and in some ways he is correct, since it's very focused around autonomous "parts" communicating solely via messages.

Finally, I haven't read all 162 comments here - but OO is not "bad" nor "the holy grail". There are different ways of doing programming, and it's as simple as that. I can find joy in simple imperative coding as much as I can long for the days I was working in pure OO in Smalltalk. :)

> Finally, I haven't read all 162 comments here - but OO is not "bad" nor "the holy grail". There are different ways of doing programming, and it's as simple as that

What are we programmers supposed to do with all of our free time except argue about what is the holy grail and what is not?! I feel so empty.

Jokes aside, I think this is the most important point really. There is no single language/paradigm that works best for everything; it all really depends on the context, environment, and so much more. It's easy to forget on the internet that someone's environment might be vastly different from yours, and to assume your solution will work for others because you're missing their context.

OOP is hated all around the world, and still solves problems. Functional programming is loved all around the world, but is still not the right solution for everything.

Strangely a lot of the functional programming advocates are fine with closures, which are guilty of all the same issues -- private state, hidden state, combining data with functions. Closures are basically lightweight objects. Really the debate should be about whether you are dealing with pure objects or not, rather than whether you are binding code and data into a single variable that is passed around.
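To make the closures-as-lightweight-objects point concrete, here's a minimal Python sketch (names are hypothetical): the captured variable is private state, exactly like an object's field, and the returned functions are its "methods".

```python
def make_counter(start=0):
    # 'count' is private state captured by the closures below;
    # nothing outside this scope can read or reset it except
    # through the two functions we hand back.
    count = start

    def increment():
        nonlocal count
        count += 1
        return count

    def current():
        return count

    return increment, current

inc, cur = make_counter()
inc()
inc()
# cur() now reports 2, although no class was defined anywhere
```

Functionally this is indistinguishable from a Counter object with two methods, which is the whole point of the comparison.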

I once saw a python function that returned two other functions, a and b, such that depending on what was passed to a, the return value of b would change. Sounds terrible, right? Well, a was set_test_parameters(params) and b was get_test_results(test_run).

In OO parlance, they were two methods of the same test_ object, but because they were returned as closures the underlying object was hidden as an internal variable in the function that returned a and b. In OO parlance that would be a factory. Wonder what Joe Armstrong would think about that -- it was both cringeworthy and quite efficient, as the code that consumed a didn't need to know about b and vice versa. One could even think of a and b as two ends of a pipe, or as pointers to inputs and outputs of an unspecified function.
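A rough reconstruction of that pattern in Python might look like this (the internals are my guess; only the function names come from the comment above):

```python
def make_test_harness():
    # Shared state hidden in the enclosing scope -- in OO terms, an
    # anonymous object whose two "methods" are handed out separately.
    state = {"params": None}

    def set_test_parameters(params):
        state["params"] = params

    def get_test_results(test_run):
        # Hypothetical: results depend on whatever parameters were
        # set through the *other* closure.
        return {"run": test_run, "params": state["params"]}

    return set_test_parameters, get_test_results

set_params, get_results = make_test_harness()
set_params({"threshold": 0.5})
# get_results("run-1") now reflects the parameters set via set_params,
# even though the two functions look unrelated to their callers.
```

The consumer of one closure never sees the other, yet they communicate through the hidden shared state, which is exactly the "two ends of a pipe" framing.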

Being able to cleanly design a complex program so accurately that only immutable types are used is difficult. Then throw in large numbers of junior engineers working in tight time windows, subject to constantly changing requirements, and the odds of that codebase being pure a few years out is basically zero. In the end, we have to ship.

"Obligatory c2.com reference": https://wiki.c2.com/?ClosuresAndObjectsAreEquivalent

""" The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened. """

> Strangely a lot of the functional programming advocates are fine with closures, which are guilty of all the same issues -- private state, hidden state, combining data with functions.

In functional programming closures (like other functions) only hold immutable data and cannot have side effects, so there is no private/hidden state involved. Code which can mutate state or produce side effects is procedural, not functional.
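The distinction can be shown in a few lines of Python (a sketch, since Python doesn't enforce purity): a closure over a value it never mutates behaves like a pure function, with nothing you could meaningfully call hidden state.

```python
def add_n(n):
    # The closure captures 'n' once and never mutates it, so the
    # returned function is referentially transparent: same input,
    # same output, forever.
    def add(x):
        return x + n
    return add

add5 = add_n(5)
# add5(10) is always 15 -- there is no state that could drift
```

Contrast this with the mutable-counter style of closure, which really is an object in disguise; the immutable version is just partial application.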

this is misguided to such an extent

As bizarre as the stated API sounds, it's actually a good choice for a lock-free triple buffer where set() and get() are called on separate threads and shouldn't block each other: https://lib.rs/crates/triple_buffer

I think it's also useful for SPSC channels.

Such an extent mere words can't even comprehend your reasoning.

"hated all around the world" sounds a bit harsh :) Sure, lots of people like "throwing hate" at it, but AFAICT it's mostly "bad use" that gets people riled up like insane deep inheritance or overly complicated frameworks. Also, it seems especially young guns feel cool by claiming FP is superior ;)

AFAIK OOP languages are still ruling the whole GUI space, which is reasonable since GUI was the main driving force behind it (see history of Smalltalk) and the domain is such an obvious candidate for composition of separate independent elements. What languages are primarily used for GUIs today? Java, C#, Swift, ObjC, C++, Dart, Kotlin ... and the list can probably be made much longer.

On the web ... well.

Not sure if OO really rules the GUI space. AFAIK, at least Swift, Dart (Flutter) and Kotlin are trying to solve the mismatch between OO and reactive programming (where functional approaches are very helpful).

Most of GUI development is pure data handling (render state and react to interactions) where a strong functional toolset is much appreciated (lambda, pattern matching) and some parts are well done with closures (where OO comes in)

Ultimately I would love to have a GUI development language that allows me to use one set of language features and constraints when I really just want to use functions on immutable data (strong pattern matching and ADTs for modelling), and another when I need strong closures, with message-based dispatch and reactivity.

Basically, whatever React does, minus the constraints of requiring JS compatibility.

I'd say OO still does rule the GUI space. The various reactive programming GUI tools are still struggling to reach the levels of complexity of UIs that were already commonplace in the 90s. Functional GUIs have their own problems with giant balls of state that make composition difficult. So we break it down into smaller bits and it starts to look like objects again.

IMO the functional GUI push has less to do with functional GUIs being superior and more about dealing with the piece of shit that is the DOM. Native environments that don't have to deal with the DOM don't get the same benefit and have new problems that are solved by more OO approaches.

It all depends if you consider basically all non web GUIs developed the last 40 years or if you limit yourself to reactive web UIs developed the last few years. Tongue in cheek :)

Sure, reactive frameworks in js that mainly operate on the DOM/HTML/CSS stack may be very little OO, but Swift/Dart/Kotlin are OOP languages and Flutter for example, while using a reactive model, is still 100% OOP with Widget classes in a composition and so on.

Well non-web GUIs have been basically stagnant for a good chunk of that 40 years :)

You're right that it's OO under the hood. The latest thing I've used is CLJFX which is a React-like wrapper around JavaFX. I don't feel the OO side has much effect on my experience using the library. It's still events, callbacks, state-updates. The class hierarchies are irrelevant to the library user the vast majority of the time.

I wouldn't be surprised if for the library writers it provides benefits with code reuse and type gymnastics. But for mere mortals that need to refactor and reorganize our messy ideas OO hierarchies seem scary after React-like development.

Not sure if you are being sarcastic or not. But if you look at how you develop Windows apps (VB, WinForms, WPF, UWP and now MAUI) or Mac/iOS apps (from MVC to MVVM, and now MVU), not forgetting Delphi, I find it far from stagnant.

With React you just switch your complexity. I don't feel more productive with my functional components than with my class components. I'm even more scared to introduce a memory leak with my useEffect than I was before.

But I do feel more productive with Svelte or Blazor. That makes me realize that OO vs FP isn't the real vector. Reactive programming with good design is.

Like you said reactive programming and OO are not exclusive.

And if you look at how you do React now with useState, useEffect and all, it's exactly a reproduction of OO, but inside a function that doesn't behave like a function anymore. And HOCs stand in for the inheritance/mixins/composition you flushed down the drain.

Well the DOM is basically an OO model. If you are going to have a retained mode UI it's basically going to look like an OO model.

In the end it is probably most important what you like to use rather than how it performs, which abstractions are best, etc.

If you can wake up every day and think "Today is a great day for some non-OO programming" and you get to it, adding value to you and hopefully others, who is someone else to look down on that?

> What are we programmers supposed to do with all of our free-time unless argue about what is the holy grail versus which is not?! I feel so empty.

Learn about monads for advanced pedantry points? (I joke.)

After reading the comments, I get the impression we (and by proxy, many people?) don't agree on what OO means. There's a spectrum, including "Using classes / structs", "Not pure functional programming", "inheritance / getters/setters / factories", "everything is a class".

I think what OP means in the case of Joe Armstrong's comment is that Erlang follows the original conception of OO, as defined by Alan Kay, in which code is structured as black boxes passing messages and has nothing to do with semantics. It's true though, it's become such a muddy term.

The funny thing is that Erlang is more OO than any other so-called OO language. Messaging in Erlang is real messaging, not just another word for a function call with funny syntax.
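The difference can be sketched in Python with a mailbox-driven "process" (a loose analogy only -- real Erlang processes are far cheaper and have richer semantics; all names here are made up). The caller can only put messages in the mailbox; it has no handle on the internal state and no way to call into it synchronously.

```python
import queue
import threading

def spawn_counter():
    """A toy 'process' whose only interface is its mailbox."""
    mailbox = queue.Queue()

    def loop():
        count = 0  # state lives entirely inside this thread
        while True:
            msg = mailbox.get()
            if msg[0] == "incr":
                count += 1
            elif msg[0] == "get":
                msg[1].put(count)  # reply on the channel the caller sent
            elif msg[0] == "stop":
                return

    threading.Thread(target=loop, daemon=True).start()
    return mailbox

mbox = spawn_counter()
mbox.put(("incr",))
mbox.put(("incr",))
reply = queue.Queue()
mbox.put(("get", reply))
# reply.get() returns 2, delivered asynchronously by the "process"
```

Note that even "reading" the counter is a message exchange, which is what distinguishes this from a method call with funny syntax.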

I've said this multiple times over multiple years and I'll say it again just because so many green developers get misled.

Object oriented programming is good. Not in absolutely every situation, but for most business applications it's better than the alternatives. The reason it's good is that it is productive. Objects map neatly to records, which pushes global state persistence and all the hairiness of locks to a good RDBMS like Postgres. It's naturally normalized, but can be denormalized without much work, and in exchange for bundling state with functions you get easier introspection which is especially helpful in debugging.

These conversations are better done with examples. Say you want to build a social network around videos, like YouTube. Say a user can block another user and, thus, stop them from commenting on their videos.

Would you rather type out this:

    user.blocked_by? other_user
Or this

    is_blocked(user, other_user)
When doing the check? Personally, seeing the blocked_by? method on a user makes it easy for me to understand. Who has blocked whom is more natural to talk about when there is a primary object subjecting the other object to a test. For all the anti-OO stuff out there, it all boils down to the same thing in the end. For business logic, your functions need to know so much about the data that they are operating on that it's pointless to try to avoid bundling the state. I'm never going to pass a cheeseburger into my is_blocked function so why on earth would I avoid bundling state and operation together? It's like a map of a city trying to avoid listing shops since "maps should be about roads and geography, not shops." A pointless dogma that doesn't actually help programs get built. Most successful startups use OOP and there is a clear reason for that: It's more productive.
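The Ruby-flavored snippet above might be sketched in Python like this (a hypothetical toy model, not anything from a real codebase):

```python
class User:
    def __init__(self, name):
        self.name = name
        self._blocked = set()  # who this user has blocked (internal detail)

    def block(self, other):
        self._blocked.add(other.name)

    def blocked_by(self, other):
        # "has `other` blocked me?" -- reads naturally at the call site:
        #     user.blocked_by(other_user)
        return self.name in other._blocked

alice = User("alice")
bob = User("bob")
alice.block(bob)
# bob.blocked_by(alice) is True; alice.blocked_by(bob) is False
```

The method form puts the subject of the question first, which is the readability argument being made here.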

Now, do I use OOP when I'm doing data science stuff? Mostly no. I don't need it there. But where it works, it's magic.

I'm not sure I understand you.

    is_blocked(user, other_user)
seems like it’s still object-oriented to me. Oh sure it has terrible syntax, but we’re still designing our application with encapsulation, and if this is CL and “is-blocked” is a generic then it’s how you would write this in a CLOS-style OOP — it would just be written (is-blocked user other-user)

On the other hand:

    other_user.id in user.blocks
is what I would imagine a non-OOP version might look like, because it’s not dynamic and not encapsulated. If we were to write it:

    in(other_user.id, user.blocks)

    (in (other-user :id) (user :blocks))
I also still think these functions are better than the object-oriented ones: they’re faster, and it’s clear to the reader if you can get the ids without the object (say by pushing this into your query) you will save a lot of runtime and programmer-time.

I also think if you make decisions based on what is popular (e.g. for startups), instead of actually thinking about it, you are subscribing to dogma. Maybe sometimes not-thinking is good, but it seems possible that what you think of as a “start up” has largely only existed in the short time period that object languages were in vogue. Why do you think this is causal? Do you think if go (for example) becomes popular we won’t have startups any more?

They also force you to reach into the implementation's data structure directly (the "blocks" container), whereas the message-oriented one allows you to refactor without having to visit every call site and also potentially need to rework any other accesses that are doing more than just checking for membership.

Message passing allows much more fine-grained access controls to the implementation, and in many cases that's highly desirable, especially as the implementation gets more complex and the purposes of accessing the data more specific, and excess coupling more problematic.

I'm trying not to speculate about what 3pt14159 meant just because what you said makes sense to me, because I also think it's clear you mean something else.

Or put another way, I think agreeing that message-passing is useful is not an endorsement of OOPL: After all, Erlang supports message-passing.

know any good examples of fine grained access control in message passing systems? sounds compelling for a current project

Message passing is the basic idea behind all method calls in OO languages, and the premise behind APIs to libraries, systems, etc. It's just a formalisation for how to make requests and communicate with another system. Even HTTP is message oriented: You can send GET, POST, DELETE etc to an HTTP server without knowing HOW it will do it. You can't browse the filesystem or read byte x of a file or anything because it's access controlled behind the interface.

It's the same in programs: You have primitives, but you generally hide them behind higher level APIs, or in the case of OO you encapsulate data and behaviour together in one place to (hopefully) make it easier for a human to conceptualise what's going on. OO is no silver bullet; it's just one way among many of organising your interfaces and data, and to make combinable components that increase the expressivity and thus power of your codebase.

In the previous example, client code was looking inside data "user" to check if a particular thing exists in its "blocks" data, but it would be better to work at a higher conceptual level, such as a function/method/message called "has this user blocked user X? (returns yes or no)". Then when someone reads that code later, they can at-a-glance see what the program is doing conceptually, rather than scanning around to figure out what the code is doing with "blocks" and work out what it's attempting to accomplish.
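The refactoring benefit can be made concrete in Python (all names hypothetical): call sites only ever ask has_blocked, so the representation of "blocks" can change underneath them without a single call site being touched.

```python
class UserV1:
    """Today: blocks stored as a plain set of ids."""

    def __init__(self):
        self._blocks = set()

    def block(self, other_id):
        self._blocks.add(other_id)

    def has_blocked(self, other_id):
        return other_id in self._blocks


class UserV2:
    """Tomorrow: blocks stored as id -> timestamp, to support
    "blocked since". Every existing call to has_blocked still works."""

    def __init__(self):
        self._blocks = {}

    def block(self, other_id, when=None):
        self._blocks[other_id] = when

    def has_blocked(self, other_id):
        return other_id in self._blocks
```

Code written against `x in user.blocks` would have broken on the V1-to-V2 change; code written against the message survives it.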

> I'm not sure I understand you. `is_blocked(user, other_user)` seems like it’s still object-oriented to me.

In OO, the object exposes what services it offers.

But in your code fragment, there is no way for a new implementation of `user` to override `is_blocked` - in most languages.

(C++ has ADL/Koenig resolution https://en.wikipedia.org/wiki/Argument-dependent_name_lookup which actually does let you do this generically at compile time.)

> In OO, the object exposes what services it offers.

In modular, especially statically typed, functional programming, modules identify what services they offer, and what data items those services consume, and the concerns of specification of data types and provision of services are decoupled.

> in most languages

I'm pretty sure the language being referenced was Common Lisp, which supports multiple dispatch (in CLOS). In a multiple-dispatch OOP system methods can be specialized based on any argument—this is somewhat like ADL in C++ but more generalized. Ergo, a new implementation of 'user' can override or extend more generic versions of 'is_blocked'.

What you seem to like is infix syntax and overloading on the first argument.

I think this is really the most valuable part of OOP; it makes code read in the order that functions are applied, which is both natural and enables type driven autocompletion.

What bothers me about OOP is inheritance, subtyping and the emphasis on mutable state, which greatly complicate call chains, the type system and temporal reasoning.

Yes, but honestly that is super easy to avoid. We code in Java and pretty much all objects are @Immutable, inheritance is only used when it's the right tool for the problem (it really rarely is) and subtyping is solely done through interfaces.

I think the main reason people get OOP wrong is that they tend to think everything needs to be modelled with inheritance, which is completely wrong. Inheritance is BAD!

The rule should be:

1. Avoid inheritance

2. Use interfaces instead

3. If interfaces are not enough, use composition

4. If composition is not enough go through all other similar design patterns that are not based on inheritance.

5. If you really need inheritance, be very careful about which function you make non-final and non-private.
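Rules 2 and 3 can be illustrated with a small Python sketch (the thread's commenter uses Java, but the shape is the same; all names here are invented): an abstract interface plus composition, with no implementation inheritance anywhere.

```python
from abc import ABC, abstractmethod

# Rule 2: an interface (abstract methods only, no inherited behaviour)
class Storage(ABC):
    @abstractmethod
    def save(self, key, value): ...

    @abstractmethod
    def load(self, key): ...


class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data[key]


# Rule 3: composition -- the service *has a* Storage, it does not
# inherit from one, so swapping implementations never touches it.
class UserService:
    def __init__(self, storage: Storage):
        self._storage = storage

    def register(self, name):
        self._storage.save(name, {"name": name})

    def lookup(self, name):
        return self._storage.load(name)


svc = UserService(InMemoryStorage())
svc.register("alice")
```

A database-backed Storage could be dropped in later without UserService changing at all, which is the payoff the rules above are aiming for.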

That sounds like you like functional programming with associated functions from OOP and not much else from the GoF book of nightmares.

I mean, Erlang is an actual OOP language according to OOP's original creator, whose concept was message passing between objects. But each object internally is quite functional.

It's just the Java crowd's deep inheritance enterprise code that makes every developer who survived that want to vomit.

I think most interpretations of OOP are too specific. Interpretations are often about classes, inheritance, polymorphism and stuff like that. But the original idea of OOP was more about message passing, so that you tell an object what you would like it to do, but have no further influence over how it performs the request (if at all).

>4. If composition is not enough go through all other similar design patterns that are not based on inheritance.

IMO, that seems like too formulaic an approach without enough thinking in that step.

Also, you mention design patterns, but the classic design pattern book nicknamed GoF (Gang of Four, i.e. by Gamma, Vlissides et al), has this guideline prominently on an early page (before the main body of text):

"Prefer composition over inheritance"

, or similar words (from memory).

And that is the only sentence on that page, right in the middle, which could only have been done for great emphasis.

But you say:

3. If interfaces are not enough, use composition

And although you do say to avoid inheritance, aren't interfaces somewhat similar to inheritance, so shouldn't the same GoF guideline apply?

Asking for myself.

My copy of the GoF book does not have that sentence as the only one on its page; however, there are only two guiding principles mentioned in chapter one, the other one being:

"Program to an interface, not an implementation"

and this actually comes before the other one about composition.

Inheritance (i.e. implementation inheritance) creates tight coupling between classes, whereas interfaces exist to prevent that happening, ... they are quite different constructs.

I'm almost sure that my copy of the GoF book did have that sentence as the only one on its page, and as I said earlier, it was on a separate page before the main body of text, i.e. even before chapter one. Maybe we have different editions. Could still be wrong about that, though. E.g. it could have been a different book, not GoF, although I think it unlikely. The reason I remember it being so is that I once mentioned it to a startup founder I was consulting for, maybe because he had written some code that used inheritance unnecessarily, and I suggested during a design discussion that we follow the guideline (to prefer composition over inheritance). He was impressed (I mean by the guideline, not me) and we did follow it from then on.

Agree, and would just add that interface inheritance can be useful too and is not the quagmire that implementation inheritance is.

Method dispatch and pipes get you both type driven autocomplete and left to right execution order without all the cruft.

One thing I do appreciate about objects is the natural namespaces and the way methods naturally tend to be located with their type in the actual code.

Object oriented programming is good. Not in absolutely every situation, but for most business applications it's better than the alternatives.

The thing about OO is that, as far as I can tell, it's natural. Whenever OO is criticized, people point out that the standard OO example is "car" but the typical OO object in practice is "network credentials permissions". So what? The usefulness of OO is exactly that it allows, partially, some totally fugly thing like "network credentials permissions" to be treated, sort of, like a thing we sort of understand: a fricken car.

And further thing is, the fp thing seems to be something like "we don't do objects at all but we do really elaborate Turing complete types, please don't ask us what the difference is, you wouldn't understand. Don't ask us whether types have private data. Especially don't ask if integers have private bits..."

The problem is mostly caused by people insisting on using OO where it looks neither natural nor sensible.

A connection client is a good candidate to be written as a class. It doesn't make sense for outsiders to see the internal connection state and manipulate it arbitrarily. You just want to call .connect() and happily use methods on it.
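A minimal Python sketch of that shape (hypothetical; real code would wrap an actual socket):

```python
class Client:
    """A connection client whose state is deliberately hidden."""

    def __init__(self, host):
        self._host = host
        self._connected = False  # outsiders never touch this directly

    def connect(self):
        # A real client would open a socket to self._host here.
        self._connected = True

    def send(self, data):
        # The object guards its own invariant: no sending while
        # disconnected, no matter what the caller does.
        if not self._connected:
            raise RuntimeError("call connect() first")
        return len(data)  # stand-in for "bytes written"
```

The caller's view is just connect() and send(); the class is free to add reconnection logic or buffering later without anyone noticing.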

But making a math routine a class with multiple constructor parameters just doesn't make sense; why not make it a static method or a bare function (if the language supports it) with multiple parameters?
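For example, in Python the bare-function form is all you need (a hypothetical example; nobody should have to write Hypotenuse(3, 4).compute()):

```python
import math

def hypotenuse(a, b):
    # All inputs are explicit arguments; there is no state to
    # construct, hold, or tear down.
    return math.sqrt(a * a + b * b)

# hypotenuse(3, 4) -> 5.0, no object ceremony required
```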

And the worst problem (though it is not related to this topic): someone writes patterns from an OO language in another language, even when that language can express the idea directly and gracefully. (Personally I think this is why OO haters exist...)

The problem is mostly caused by people insisting on using OO where it looks neither natural nor sensible.

Another poster mentioned that any paradigm can be misused. Yes, but OO has a sort-of "least bad to misuse" quality.

You can have a date object or a query object or Haskell-function object, and they are all conceptually terrible but they still allow people who only know OO to work with all kinds of things. OO is created to "glue" anything together with anything else. The equivalent with functional programming is "FP is created to "glue" only other functional elements together with other functional elements".

And OO definitely offends the sensibility of those who want things done correctly and well - which includes lots of programmers. The "least bad" is indeed an enemy of the good. But you're not going to get all the world's systems redone correctly in the new paradigm, so if you want to end OO, you'd need an alternative that could also be least-bad in many circumstances.

> The problem is mostly caused by people insisting on using OO where it looks neither natural nor sensible.

You have exactly the same issue with any paradigm. I am seeing it now with functional programming as people go all obsessive into converting everything into it.

And I'd argue that in both cases trying to use a paradigm for literally everything is a good thing, at least in terms of learning and understanding that paradigm.

Is OO good for everything? No. Is FP good for everything? No. And yet, unless you've attempted to use them for everything, you're unlikely to understand why they ain't good for everything.

I think you're falling into a thinking trap. And I think there's probably a name for it but I don't remember it.

The point I will make is that "a car is an object you can do things with" is actually probably not how you think about a car. Personally I think of my car and the "objects" around me as merely tools I can use to achieve goals.

If I need to go to the shop I don't think to myself: I need to go to the car object and call the door_open(DRIVER_SIDE) method and then call the add_driver(self) method and then the door_close(DRIVER_SIDE) method and then the turn_key() method...

When I need to go to the shop I come up with a shopping list (or more likely get my girlfriend to send it to me) and then I think about my destination and how I can get there. I think of using my car to go there.

So simply put, I think the object model has no relation to how we actually think about objects around us.

The object model (At least alan kay's variant) may be a good description of how humans talk to each other, and I think it has its place for sure, but it's also nothing like the object model which gets sold to people these days.

To me "network credentials permissions" sounds like just some data which describes the permissions required to access another piece of data called a network credential. That's how I think about it. I am not sure it's accurate to say that my method of thinking about it is any more or less natural than thinking of "network credentials permissions" as a thing, but personally I would say that considering "network credentials permissions" to be a thing seems just outright wrong.

The point I will make is that "a car is an object you can do things with" is actually probably not how you think about a car.

I can't imagine how this isn't exactly how I or someone else thinks about a car. I might then think about what I accomplish with a car. And sure, "car" and "bicycle" are "transportation means I can steer", and we feed "transport means" and "destination" into an algorithm and arrive at a location. Sure, "transport means" is a higher logical level than "car", but both of these go into human thought processes.

If I need to go to the shop I don't think to myself: I need to go to the car object and call the door_open(DRIVER_SIDE) method and then call the add_driver(self) method and then the door_close(DRIVER_SIDE) method and then the turn_key() method...

The only reason you don't think you think about that is because your self-conception apparently delegates the process to your reflexes. But if you have to explain to someone how this happens, you reflexively dive in and give some English equivalent to your narrative above.

> I can't imagine how this isn't exactly me or someone else thinks about a car.

I think it's more likely that you think this is how you think about the car because you think it makes sense to think about a car that way. In reality, if I recall correctly, when we have a goal of "going somewhere" we don't see "objects"; we see "obstacles and tools which overcome obstacles". Specifically, you don't see what a car is (a lump of metal which moves with you inside) but rather what a car means (a tool which can be used to overcome the obstacle of distance).

Although honestly I can't come up with what to look for on the net to source this information and I currently don't have time to do it.

> The only reason you don't think you think about that is because your self-conception apparently delegates the process to your reflexes. But if you have to explain to someone how this happens, you reflexively dive in and give some English equivalent to your narrative above.

Even if that is the case, I don't see how an English explanation can map onto an object model. It more closely approximates a procedural model.

To point directly at the problem: Does the car contain me or am I riding the car? How do you describe this concept using objects? When you think of the car as a tool rather than a thing it becomes obvious: The car is a function which transforms some data <me> by changing the position from one place to another. With this approach I am no longer stuck trying to work out if the car object has to have an add_driver() method or if I should have a ride_car() method.
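The car-as-function view might be sketched in Python like this (a hypothetical toy model using immutable dataclasses):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Person:
    name: str
    position: str

def drive(person, destination):
    # The "car" as a pure function: it transforms <me> by changing
    # position, sidestepping the add_driver()/ride_car() modelling
    # question entirely.
    return replace(person, position=destination)

me = Person("sam", "home")
me_at_shop = drive(me, "shop")
# me is unchanged; me_at_shop is a new value at the shop
```

There is no container relationship to resolve: no object owns the other, and the original value is never mutated.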

> To point directly at the problem: Does the car contain me or am I riding the car? How do you describe this concept using objects? When you think of the car as a tool rather than a thing it becomes obvious: The car is a function which transforms some data <me> by changing the position from one place to another. With this approach I am no longer stuck trying to work out if the car object has to have an add_driver() method or if I should have a ride_car() method.

Both and as someone who has implemented vehicle logic in a game before you usually want to model both.

You’ve done the classic programmer thing of creating an abstraction over the grisly details with ride_car, which is great if you’re just representing the fact abstractly; but if not, someone still needs to care about the details and model them. An abstract version is great if you're planning, for example, but it doesn't remove the need to build the details out.

We do go to the door and call door_open(DRIVER_SIDE) - it's just second nature now because we've done it so many times that it takes up no conscious bandwidth. But that subroutine is running somewhere in the brain.

To use a car, we're performing a bunch of actions involving modifying the state of a car. That seems to be quite analogous to the OOP way of thinking.

You're right that we don't consciously think of car travel in this way, but I don't see that as inconsistent with OOP either. Our thinking about cars is highly abstract, as though it's an abstract class/interface with a single go_somewhere(destination) method, which fits quite well with OOP thinking.

But the thing is that now the car has to know about the world. If there's a wall on the driver side how can it open the door?

I'm not dogmatic on OOP vs FP, but let me take the FP side here. We could model this as the car being a data structure with its position (x, y, etc.) plus a representation of what opening the door means (width of the car + length of the door, as an anonymous or named function that can be made polymorphic). Now our open_door(car, world) can take both: the world doesn't need to know what a car is, and the car doesn't need to know about the world either; the function just serves as a mapping. open_door checks the position of the car in the world, transforms the car according to its procedure, and checks that it still fits the world; if not, it produces an annoying beep.
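A stripped-down Python version of that open_door(car, world) idea (one dimension, invented names, just to show the decoupling):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Car:
    x: float            # centre of the car
    width: float
    door_length: float

@dataclass(frozen=True)
class World:
    wall_x: float       # a wall to the right of the car

def open_door(car, world):
    # Neither type references the other; this function alone knows
    # how to check whether the opened door still fits the world.
    door_edge = car.x + car.width / 2 + car.door_length
    if door_edge > world.wall_x:
        return "beep"   # door would hit the wall
    return "open"

# open_door(Car(x=0, width=2, door_length=1), World(wall_x=1.5))
# yields "beep": the door edge at 2.0 would pass the wall at 1.5
```

Adding a Bicycle or a Truck means adding data and perhaps a new door-geometry function, not threading world-knowledge through a class hierarchy.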

> Our thinking about cars is highly abstract, as though it's an abstract class/interface with a single go_somewhere(destination) method, which fits quite well with OOP thinking.

So what you're saying is I'm going to have to downcast to a concrete class to figure out if my object is actually suitable for going to the store or my bathroom (my house isn't big enough to drive a car through).

Lines up with most OOP systems I've worked in.

Ideally the interface is named in a way that's self-documenting and makes it clear that it's not referring to moving around your house, no downcasting required.

Damn, you really nailed it with that car example, because I can already see the sort of errors that might arise:

* did not call putFeetInCar before adding driver

* did not call putShoesOnFeet before calling putFeetInCar

* did not call gripKeyInHand or set up HandKeyAllocator

* did not set up HandKeyAllocatorManager or use KeyDashboardMeldConnector to insert key into KeyHoleDoodadInterfaceXConnector

* did not do correct incantation of prayer to Cthulu to ensure AxelAlignmentProtocol is properly specified

Man, this really is the crux of it for me: when the focal point is a series of opaque state mutations, there are exponentially more ways to do those steps in the wrong order than the right order. And the contracts for those methods have absolutely zilch to say about what they assume has already happened or will happen after they're called. You just have to know (or hope somebody wrote it down in English). This is my biggest pain point/source of anxiety when working in an OOP codebase.

And then guess what? Suddenly you're extremely concerned with the internal state of those objects. Because understanding it is essential if you want to have any hope of actually using them. And the great irony is that this completely undermines one of the main benefits of OOP: private state that the outside world doesn't have to care about.

> did not call putFeetInCar before adding driver

> did not call putShoesOnFeet before calling putFeetInCar

That's bad architecture, not an OOP flaw. These should've been private methods called and error-handled properly from driver.prepareForDriving, or even implemented as a DriverPreparedForDriving subtype.

> did not set up HandKeyAllocatorManager or use KeyDashboardMeldConnector to insert key into KeyHoleDoodadInterfaceXConnector

All these sound like bad OOP (or Java) to me, a HandKeyAllocatorManager is not a thing in the domain. I'd rather build my logic around objects like HandsContents, KeyKeyholeCompatibility or SeatedDriver.

I’ve decided to assume this is perfect satire and have upvoted it accordingly.

It's not supposed to be, what made you think that?

I think that the way we think about this has more to do with purpose or intent, and maybe with the language that we speak, and it becomes more complex when thinking from a first-person perspective.

For me, for example, describing a process where one person gives the car to another person to go somewhere, it feels very easy to say

  person1.press car_key.button
  person1.explains person2
  person1.gives car_key to: person2
  person2.enters car
Of course OO does not completely satisfy this, but in general I see that nothing happens without someone/some object/some matter doing something to something else.

So I expect that something happens in/to the world because an object.did something to a subject. Else nothing happens. Thus it does not feel natural to think first about an action that requires a force and the subject to be applied to. An object is always exerting an energy/force onto something else.

To summarize, I think if one is a person who thinks more about energy than matter, then FP feels natural. If one is a person who thinks more about matter, then OO is more natural.

>The thing about OO is that as far as I can tell, it's natural

How so? If it was natural, it wouldn't take 30 years of programming to arrive at it (and in a bad way).

>The usefulness of OO is exactly that allows, partially, some totally fungly thing like "network credentials permissions" to be treated, sort of, like a thing, we sort of, understand, a fricken car.

Cars have instance variables and methods? You car.drive() your car?

I mean at the right level of abstraction - yes...?

You can represent "network credentials permissions" or "car" naturally in non-OO too though. Non-OO-procedural programmers would probably create a struct representing the credentials and have a bunch of (hopefully namespaced) freestanding functions to operate on it, just like OO has a bunch of methods operating on the class. Functional programmers would similarly represent that as a data structure and a set of functions to operate on it. Furthermore, since the data structures used are usually the core language provided ones, you can use the full collection of the languages library of functions to operate on it to transform, filter or otherwise process the data.
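The non-OO shape described above might look like the following sketch: a plain record plus freestanding "namespaced" functions, with generic library functions still applicable to the data. All names (Credentials, credentials_with_scope, etc.) are hypothetical.

```python
from typing import NamedTuple

class Credentials(NamedTuple):
    """Plain immutable record; no behavior attached."""
    user: str
    scopes: frozenset

# Freestanding functions operating on the record, namespaced by prefix.
def credentials_with_scope(creds: Credentials, scope: str) -> Credentials:
    """Return a new record with one more scope granted."""
    return creds._replace(scopes=creds.scopes | {scope})

def credentials_allows(creds: Credentials, scope: str) -> bool:
    return scope in creds.scopes

c = Credentials("alice", frozenset({"read"}))
c2 = credentials_with_scope(c, "write")
# Because it's a core data structure, generic library functions apply directly:
assert sorted(c2.scopes) == ["read", "write"]
```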

You can do that with OO too, of course, except that encapsulation means you typically do it in the class and if you want to do other transformations, you either need to amend the class, expose the internal data (directly or through getters/setters), or inherit from the class. In a functional or procedural language you can also provide encapsulation/data hiding if you wish.

I don't think either approach is more natural than the other and they're both more or less equivalent; it's just that different languages encourage different default approaches.

Personally, after twenty years of programming, I find that I value immutable data over all in most cases (ie unless I'm writing performance-sensitive C++ code) and as long as I have that and a reasonable way of managing it, I don't really mind if I'm representing things as immutable objects or something else.

> The reason it's good is that it is productive.

I'm going to half-agree but rephrase: it's a kind of programming where even a novice programmer can iteratively approach the correct solution by adding more and more code. That's good from a management point of view: you can always feel like you are getting closer and closer to the goal because you get more and more code and fulfil more and more of the requirements. But I wouldn't go so far as saying this is productive, especially not in the long term. For one thing, you'll end up with a lot of OO code, which is the worst thing for productivity.

> Would you rather type out this: Or this

That's not the important distinction between OO and functional. The important distinction here is: when you call the function, is it defined on user? Is it defined on some more abstract base_class which also applies to "groups" and "corporations"? If you want to change it, what do you have to do, and which types and behaviors do you affect when you change it? Subclass polymorphism is almost invariably a net negative in my experience.

The syntax of OO is great. We realized this especially since it grew to fame at the same time as the clever IDE with autocompletion. But your example would look like the first in e.g. Rust, which isn't an OO language by any stretch of the imagination.

> But I wouldn't go so far as saying this is productive, especially not in the long term. For one thing, you'll end up with a lot of OO code, which is the worst thing for productivity.

I really dont see why either of these would be the case.

Your argument for why OO is good is that you prefer one syntax over the other? The examples you gave are just syntax and have nothing to do with OO.

You could imagine a language that isn't OO but where you could define the . operator to dispatch certain functions with the same syntax as OO.

> Most successful startups use OOP and there is a clear reason for that: It's more productive.

Evaluating programming techniques based on what startups use doesn't sound like a good idea to me because 1) Codebases in startups usually aren't the best examples of quality and cleanliness and 2) optimizing for productivity above anything else has the potential of failing in the long term in a lot of settings (including startups).

I think you have your own definition of OO going here. Maybe a better name for it would be db data centric programming? You can use a class based object system to do it, but it's not needed or necessarily especially well suited for it.

I agree positional arguments can be bad in a case like this. OTOH you can just look at .is_blocked_by? as infix syntax here which just works in many languages. The syntax problem in the example can also be solved eg by using named arguments. (But to reiterate record.function(x) syntax doesn't need an OO system either, or even language support if you have macros)

> Objects map neatly to records

This is plain wrong. The problem of mapping records (presumably you mean database records, which are actually tuples, but let's call them records) in a relational database to objects in an object hierarchy is the source of the "object-relational mismatch", wherein the fundamentally graph-based structure of an object model is intrinsically incompatible with the relational sets-of-tuples structure of a relational database.

To try to solve this problem, an inordinate amount of time has been put over many years to develop various ORMs (Object-Relational Mappers) which try to hide this intrinsic incompatibility under many layers of abstraction.

A whole new industry came out of the simple fact that people who had been led into thinking that OOP was the only solution to any problem wanted to use relational databases to back their objects.

These ORMs all come with innumerable caveats and shortcomings. Their abstractions constantly leak, and their performance is a constant firefight between making code idiomatic OOP and making code actually perform in a reasonable way.

To claim that objects map neatly to database records is completely ignorant of even the history of how ORMs came about.

Your comparison of `user.blocked_by? other_user` to `is_blocked(user, other_user)` is a complete red herring anyway. Nothing stops a non-OO type system from letting people specify that a function applies to a type such that you can write `foo.is_blocked_by(bar)`. Nothing also stops you from writing code more explicitly and clearly such as: `foo in user_blocklist_of(bar)` or even mixing both concepts and writing `foo in bar.blocklist()`. In summary, these are all questions of syntax and style and have nothing to do with OOP.
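The "syntax and style" point above can be sketched in a few lines, assuming a hypothetical plain-dict user representation and made-up helper names (user_blocklist_of, is_blocked): the same blocking check can be spelled as a free function or a membership test over the same data, with no object system involved.

```python
def user_blocklist_of(user):
    """Hypothetical accessor: the set of usernames this user has blocked."""
    return user["blocklist"]

def is_blocked(user, other):
    """Free-function spelling of the check."""
    return user["name"] in user_blocklist_of(other)

bob = {"name": "bob", "blocklist": {"alice"}}
alice = {"name": "alice", "blocklist": set()}

assert is_blocked(alice, bob)                    # free-function style
assert alice["name"] in user_blocklist_of(bob)   # `foo in user_blocklist_of(bar)` style
```

Both spellings compile down to the same membership test; which one reads better is a style question, not a paradigm question.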

> The reason it's good is that it is productive.

This implies that somehow all other methods of programming are less productive?

When I have the rare occasion where I need to write some kind of web application I reach for three things: Flask, SQLAlchemy Core and sqlite3. I design the database then design the application around it. By actually directly interacting with sqlite3 using a query generator like SQLAlchemy Core I get control over and the ability to use the features of my database of choice. I am no longer stuck trying to fix query performance issues by randomly mangling an object model until the resulting database interactions act like I want them to.

This to me seems far more productive than any of my experiences dealing with ORM performance issues and abstraction leaks.

> Who has blocked whom is more natural to talk about when there is a primary object subjecting the other object to a test.

I think there's nothing about this particular example which puts particular emphasis on either side of the block as the "subject" and the other side as the "object".

- a blocks b

- b is blocked by a

There's nothing about either of these which stands out as the correct mental model.

In reality it's not a subject-object relationship it's just a relationship between two subjects.

In a database this would be stored as a table containing a many:many relationship between user and user.

By attaching this relationship strictly to one side of the relationship you've actually ignored the case where it may be useful to take the perspective of the other side of the relationship.

If you end up factoring that into your object model you will presumably end up having two functions: `.is_blocked_by` and `.is_blocking`. These will likely have to have two separate implementations (although I'm sure you can get the ORM to handle that for you). (Also, don't forget the performance impact that you're likely dealing with WHOLE user records at once and relying on some lazy loading specification which you probably defined too loosely to be useful to avoid loading the entire "other user" from the database just to check if you're blocking them.)

I think now it should be clear that the single separate function approach is actually far more representative of the reality of the relationship.
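The relational modelling described above can be sketched directly with the stdlib sqlite3 module (the schema and function name are illustrative, not from the original comment): blocking is a row in a many-to-many table, and both perspectives ("whom do I block" / "who blocks me") are the same relation queried from either column.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# One row per block; the relation is directional.
db.execute("CREATE TABLE blocks (blocker TEXT, blocked TEXT)")
db.execute("INSERT INTO blocks VALUES ('bob', 'alice')")

def is_blocked_by(db, user, other):
    """True if `other` has blocked `user`; no user record is loaded, just the relation."""
    row = db.execute(
        "SELECT 1 FROM blocks WHERE blocker = ? AND blocked = ?",
        (other, user),
    ).fetchone()
    return row is not None

assert is_blocked_by(db, "alice", "bob")       # bob blocks alice
assert not is_blocked_by(db, "bob", "alice")   # the other direction is a separate fact
```

Note that both `.is_blocked_by` and `.is_blocking` reduce to the same query with the argument order swapped, which is the "single relation, two perspectives" point.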

The former isn't how most OOP languages do it and those two options aren't the only two options.

Personally, as someone who mainly uses Clojure, I would rather type:

    (blocked-by? user other-user)
With that said, it really doesn't matter which of these options I have to type, because as usual, the syntax is less important than the semantics, and all of these options can work with or without OOP. Now, OOP isn't bad per se, it's just a tool. It depends on how you use it.

I do buy into the Clojure notion though that there is a problem with most OOP languages and that is that they conflate multiple concerns and features into objects. Give me the features to mix and match as I please, instead. That is, if I want to keep my data and my functions together, great, but if I don't, also great. If I want to do polymorphism with objects, great, if I want to do it without, also great. Let me choose, based on the problem I'm solving.

I tend to favour a functional-first approach, but will use OOP as it makes sense or when it makes the API easier. To me, immutable state (where possible) is more important than an avoidance of OOP. I also write a lot of C++ and I won't shy away from creating some classes where it makes sense.

> For business logic, your functions need to know so much about the data that they are operating on that it's pointless to try to avoid bundling the state.

This hasn't been my experience at all working with Elixir. A typical Phoenix app will have hundreds to thousands of functions related to business logic and none of them bundle state.

It works fine and debugging is much easier than it was for me previously with JS or Ruby because of the pure functions.

> Who has blocked whom is more natural to talk about when there is a primary object subjecting the other object to a test

It is more natural when you're already used to OOP, but like any language, each has its own syntax, which feels very natural to its speakers. Comparing the structure of a similar sentence in Japanese and English, would you tell the Japanese "the English sentence is more natural"? I don't think they would agree. So your point about syntax is irrelevant. If you practice a language and love it, you'll embrace its syntax and it will feel very natural.

> Objects map neatly to records

Well, some data structures map even more neatly to records, no? {:id 1, :name 'bob'}

> Most successful startups use OOP and there is a clear reason for that: It's more productive

I prefer not to waste time arguing with this, but programming is not about "successful startups".

> Objects map neatly to records

FP data items (also, often called “objects”, but with different meaning) do so at least as well as OOP objects. Better in many ways.

> which pushes global state persistence and all the hairiness of locks to a good RDBMS like Postgres.

OOP obscures whether an entity is an immutable data item extracted from an RDBMS, or a mutable local thing, or a mutable thing whose mutation also affects an external resource like a database, but there is nothing about FP which prevents, compared to OOP, pushing "global state persistence and all the hairiness of locks to a good RDBMS like Postgres." In fact, the opposite; there's a reason no one talks about a Functional-Relational Impedance Mismatch.

> It's naturally normalized

No, OOP is not “naturally normalized”.

Ada, CLOS among others use the second syntax for OOP.

It's clear the GP has never worked with multiple-dispatch languages.

Syntax is separate from programming paradigms. Some languages allow you to write any function infix style, like

  user `blocked_by` other_user

This is one point. The other point is hiding in real life usage.

User is derived from human, that from lifeform, that from cell, DNA, (...); each of those is derived from molecule, that from atom, and then it goes into the quantum physics parts. And is_blocked is implemented on the Higgs boson, which has a method is_blocked and in case of error throws it from there. Surely is_blocked is also implemented on all other levels, each adding just a small piece of information to the result. /s

OO is elegant. Even C++ operator overloading is great (MFC CString is a nice example). The problem is that people who base what they are doing on what they have learned in school or some "pattern" (without enough mileage; I am talking about the 15+ years you have to spend building OO monstrosities before you reach enlightenment) don't understand that you don't need a Higgs boson in the user object. And someone searching for an issue in this architectural beauty represented by the user object is going to start scratching his head when is_blocked is called and catches higgs_boson::exception. Even more so when he encounters god_factory.

Once someone figures out that there is a need to understand the universe to be able to handle a bug in an otherwise simple is_blocked function, the obvious and rational reaction is "screw OO". But the problem is not OO but rather people using it to express their philosophy of life that needs to be infinitely, ad absurdum, reusable...

I have seen this everywhere, from the most prominent libraries for C++ to Java, where this is seen at its worst. I don't dislike OO. I dislike people making lasagnas, unscrupulously patternizing everything and everywhere for the sake of doing it.

And I haven't even started with patterns, refactoring once you figure out that the scope has changed... I will rather use an example: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

At the end: I love OO. But I prefer a language where you can't inherit. Can't throw exceptions. This prevents people from overcomplicating the code for no reason and helps me not have to read their whole epic architectural poetry to use a simple function.

Face it. Most programmers can't code. They are terrible at doing it. And the more powerful the tool they get, the more they will abuse it, and in most cases not even know they are doing it.

I find the hardest part about OO is that a lot of people don't know what good OO looks like. Most new developers fall into the pragmatic trap and shy away from refactoring because of startup culture cargo culting.

Good OO requires large refactors on the regular, and with a good design, refactors like this are low cost, high return; but failing to do them results in mud eventually.

I used to shy away from it "because nobody can do OO so it's better if we just don't".

Writing everything in procedural and functional is quite productive, but it too requires adherence to rules and avoidance of mutable state.

Once I realised there are no shortcuts (you just have to learn what works and what gets you into trouble later), I tried OO again; only now I use it sparingly and tend to mix procedural, functional, and OO depending on the task at hand. One of the amazing things about PHP is that you can do this easily and clearly. Things like Go, Java, or even JS these days tend to force you one way or the other.

Earlier on I aimed for plain objects / structs for information, grouped state passing but I realised these often lead to smells that are hard to clean up. Now I limit their usage to the edges and opt for objects instead. Passing implementations along with the state really cleans things up in coherent ways.

I think one of the scariest things about good OO is that it looks like bad code to me. It looks "not DRY". It looks untidy: lots of state sharing and member variables to track things. From a functional perspective that stuff gets you into trouble. But from an OO perspective it's what works. The trick is keeping classes small enough to be aware of all the member variables when reading any member function. You shouldn't get blindsided by random state changes elsewhere; that is a sign there is more than one thing being done by that class.

> Earlier on I aimed for plain objects / structs for information, grouped state passing

I’m a relatively junior dev and I believe this is a pattern I use a lot. Would you be willing to elaborate on what you mean by it and what some common pitfalls are?

It's probably my default "get things done" style of code so I don't want to suggest it's something you want to actively avoid.

Plain objects have the benefit of OO in that you have a bunch of normal friendly objects that map well to a database record. But they tend to be treated more like variables. They work well for groups of functions that need a lot of the same parameters but you don't quite want to lock in an object boundary.

Often I'll start with a plain object or plain functions, then once I can see the seams I'll form objects. Sometimes you can see these up front as obvious; I still try to hold off. The longer you wait, the better and more concise your objects will be. You want your objects to look more like behaviors than records. (Though there is heavy crossover: plain objects / structs make good records and you can easily consume them in objects.)

The big pitfalls:

Mutation: avoid it where possible; mutation is a sign you need an object. Immutable structs are bliss to use; they're easy to reason about. Mutable structs are basically impossible to reason about: you never know what's in there.

Using an object to pass a single variable to a function.

Using an object to pass a single variable / subset to a sub function call.

Basically, try to terminate plain objects at boundary edges like you would nullable variables. There isn't anything wrong with passing an object around for one variable, it's just a bit messy and creates a bigger dependency than required. It makes it hard to refactor your plain object out later because you can't tell what is actually depended on down in the stack.
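The "immutable structs are bliss" pitfall above can be sketched with Python's frozen dataclasses (Point and the field names are illustrative): a frozen struct refuses in-place mutation, so every update is an explicit copy you can reason about.

```python
from dataclasses import dataclass, replace, FrozenInstanceError

@dataclass(frozen=True)
class Point:
    x: int
    y: int

p = Point(1, 2)
moved = replace(p, x=5)       # explicit copy-with-change; the original is untouched
assert (p.x, moved.x) == (1, 5)

try:
    p.x = 99                  # mutation is a loud error, never a silent change
except FrozenInstanceError:
    pass
```

This is exactly what makes such structs easy to pass around: nothing downstream can quietly change what's in there.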

>I find the hardest part about OO is that a lot of people don't know what good OO looks like.

Aside from Bjarne Stroustrup, is there any computer science major figure, great dev, etc., that thinks there is such a thing as "good OO"?

Kent Beck, Martin Fowler come to mind. Sure Alan Kay thinks the same too.

Though I don't really understand the question. Whether OO is good or not is a different question, and I'd posit the more important one.

>Kent Beck, Martin Fowler come to mind. Sure Alan Kay thinks the same too.

Alan Kay? Alan Kay has had the worst to say about OOP as it's known ("but there's a different good paradigm, which real-world OOP devs in C++/Java/etc might not use, but Alan Kay had in mind and Smalltalk offers" is not really a rebuttal).

>Though I don't really understand the question. Whether OO is good or not is a different question, and I'd posit the more important one.

And we can't divine the answer to that question.

But we can see what the luminaries have to say about it.

I'd rather follow the good examples/advice (of those that did major practical work in CS) than take some "objective" (how?) answer to the matter as gospel.

Bertrand Meyer, creator of Eiffel: https://en.wikipedia.org/wiki/Bertrand_Meyer.

Plenty of books to read from, but I guess many don't remember that such things exist; they just read a couple of Medium blog posts as their learning process.

Medium blog posts are mostly trash. They all seem like they are written by junior developers.

Not sure what is meant here.

That there are plenty of books teaching OO programming?


Is there any major CS figure / language developer etc that thinks OO is a good idea in 2021 was the question.

Not whether some O'Reilly or Manning or whatever author has some book about it...

I don't know, Anders Hejlsberg, Yukihiro Matsumoto, Guido van Rossum, Brian Goetz, Mads Torgersen come to mind.

And even the cool kids that got two releases yesterday, Rust and Julia, do support OOP, so I guess we can include their core design teams as well.

Well, I'll give you Guido and Matz, who started their languages in the early 90s when OO was all the rage and stuck with it.

Anders, Goetz, and Torgersen in recent years have all talked about the need to add functional features to the languages that started as OOP (and that is what they have been doing with C#, TypeScript, and Java since 1.6 or so). One also suspects they made them OOP because they built them on commission and the goal (of Sun, MS, etc.) was to attract devs with what was considered hot at the time...

But neither Rust nor Julia support inheritance, so if their style can even be called OOP, it's not the OOP of the kind that Joe talked about where you "get a gorilla holding the banana and the entire jungle", and which most devs know from C++, Java, and yes, Python and Ruby.

Inheritance is not a must-have in OOP, as plenty of SIGPLAN and ECOOP conference proceedings report.

So yes their style can in certain cases be called OOP.

Just like being a multi-paradigm language, with support for OOP, doesn't magically make it non-OOP.

Objects map neatly to RDB records mostly in CRUD applications. The reason is that ORMs build SQL queries, but SQL does set operations. OOP does not map neatly to set operations, and that's one of the reasons ORMs always generate inefficient, complex queries.

There are also languages like Nim which have a method call syntax, that means:

  myMethod(myObj, myParams) == myObj.myMethod(myParams)

Relevantly, the same thing applies in Python, which is probably where Nim got it (though I don't recall if Python actually lets you interchange them - only that the thing on the left side of the dot is the first argument of the method).
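On the "I don't recall if Python actually lets you interchange them" aside: it does. A method defined on a class is just a function stored on the class, so `obj.method(args)` and `Class.method(obj, args)` invoke the same code. A tiny sketch with a made-up class:

```python
class Greeter:
    def greet(self, name):
        return f"hello, {name}"

g = Greeter()

# Bound-method syntax and plain-function syntax are interchangeable:
assert g.greet("world") == Greeter.greet(g, "world") == "hello, world"
```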

Perl also does this (but with -> rather than .), and also allows interchanging the syntax.

Lua is another one. The following expressions are all equivalent:

  object:method(arguments...)
  object.method(object, arguments...)
  object["method"](object, arguments...)
Lua objects are tables (associative arrays), the dot and colon operators are forms of table lookup, and methods are ordinary functions stored in the table (or more commonly its associated meta-table) which happen to take the object as their first argument. The colon operator, which can only be used as part of a function call, passes its left-hand operand to the function along with the other arguments.

D also has this feature. Uniform Function Call Syntax.

In the second example, I would do instead:

    if (user.blockingId == otherUser)...
Even if there is some scary logic you would argue you need to hide, I'd go further to say you have a UserLocks module so you can do:

    if (UserLocks.getUserBlockedId(user) == ....
Kind of a hard example to work with because I'm not sure why a user would block exactly one other user, but yep.

You gotta stick the state somewhere. OO isn't the panacea it was sold as, but it's worked out pretty well for a lot of software.

Still, this quote is hilarious:

"Object-oriented programming is an exceptionally bad idea which could only have originated in California." --Edsger Dijkstra

"You gotta stick the state somewhere."

I like this a lot as I run into so many arguments about stateless this or stateless that, and what gets lost is that not contending with state is problematic. Databases are forced to get massive and do all sorts of goofy shit so some people don't have to ever think about state; this goofy shit creates interesting reliability challenges which translates to bad user experiences.

For my future projects, I've decided to build my own application-as-state-database-container thing: http://www.adama-lang.org/docs/why-the-origin-story

Granted, I'm focused on niche board games since they redefine complexity.

>Databases are forced to get massive and do all sorts of goofy shit so some people don't have to ever think about state; this goofy shit creates interesting reliability challenges which translates to bad user experiences.

Huh? Databases don't do anything "goofy", they implement ACID guarantees.

Which absolutely beats the untold horrors of Cthulhu proportions that would emerge if programmers had to implement state management at the persistence layer for multiple clients themselves (they'd implement a DB + ACID poorly and in an ad-hoc way, full of holes).

What people seem to forget is that relational databases are not the first technology we had for persistent state. The first versions were custom solutions per app, application-as-state, and various ad-hoc databases with their own formats and techniques, which were such a hellish landscape that devs jumped at relational databases with utter joy...

> Huh? Databases don't do anything "goofy", they implement ACID guarantees.

I think that he meant that functionality that shouldn't be there gets pushed into the database. I have definitely seen projects full of crazy triggers and SQL procedures and so on.

So, databases don't do anything goofy, but developers sure do.

Especially when scale (real or not) is in the picture.

I'm a fan of databases myself, but I am also a fan of document stores as well.

> You gotta stick the state somewhere.

This is true, and it’s the primary reason I’m not finding some way to work in Haskell. But it’s also kind of a cop out.

The issue isn’t just that state exists and will do well beyond our extinction. It’s not even just about how you manipulate state or how many restrictions you put on mutability. It’s also about the visibility of state and stateful processes, how that’s signaled, and the conventions that fall out.

This isn’t something I fully understood until I worked with Clojure. Immutable-by-default was a fantastic constraint. But more valuable to me was that where state is accessed or changed, you can tell immediately.

Just having that gives you several insights:

- This code is more likely to be impacted by outside behavior

- This code has additional concurrency requirements

- This code may be unnecessarily complex

- This code may be hard to understand a week or more later

- This code is documenting its implicit dependencies

Since I’ve moved on from Clojure I’ve mostly worked in TypeScript, and I’ve done my best to apply the same principles.

In some ways it’s a loss: you can’t signal stateful access with @ or ! or *.

But that’s all convention. In other ways it’s a huge win: if you’re trying to write functional code in TS, you eventually end up with the idiom that async/await and Promise types are generally signals of state. And the type system calls that out much more reliably.

All of that said, you do gotta stick the state somewhere. But where and how you do is the difference between eternal pain and eternal much-less-pain. And coupling state with functions on objects is definitely more likely to produce more eternal pain.

> it’s the primary reason I’m not finding some way to work in Haskell

Ok, now I'm curious about how StateT fails it.

Only that I didn’t know about it! I’ve only admired Haskell from afar.

It's not about eliminating state, it's about hiding it. OOP hides state, which means mutability is often shared and hidden.

That's what college textbook OOP taught us. It only took me a couple decades to figure out that college textbook OOP taught us wrong. If all you're doing is hiding state, it's like sweeping dirt under the carpet - you're skimping on your work now, in a way that will only create more work later.

Good object oriented design goes much, much further. To quote Alan Kay, "Doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming." (emphasis mine)

That Alan Kay quote is intriguing to me, I haven't heard it before, and I'm not sure I understand what it means.

I found that it's from this piece, although on a brief skim I'm not sure it gives me more guidance about how to do that, but I plan to spend more time with it.


If anyone has other articles to recommend on this concept, please.

That's the paper I pulled the quote from.

Reading it, and learning Pharo, and reading some Smalltalk code, was, for me, one heck of a revelation.

Yeah, Alan Kay once likened OOP to biology saying the ultimate embodiment is replicating “messages” between cells.

Once I heard that, it makes natural sense that Combine became a first class library in Swift, giving the code base ion channels so that messages just aren’t sprayed everywhere with NotificationCenter or requiring you to name each individual “protein/hormone” message in your program.

> Alan Kay once likened OOP to biology

once? it's Alan Kay's only speech!

Well yeah, obviously Kay and Armstrong agree.

Not only that, but it’s pretty clear the philosophical roots of OOP are basically just FP with a particular organizational model. The idea was never “everything is mutable and gets to mutate everything else unless you say otherwise”, it was about organizing functions (real functions) with the specific data they operate on. The big blob of imperative programming that got bolted onto that is a disservice to the idea. But unfortunately that cat’s been out of that bag for decades and isn’t going anywhere.

> Not only that, but it’s pretty clear the philosophical roots of OOP are basically just FP with a particular organizational model.

That's true of much of OOP, but implementation inheritance is the elephant in the room - it can't be modeled or understood using a pure FP model, because the combination of late binding and so-called 'open recursion' requires a dispatch step on all virtual method calls (that would in turn be modeled in FP via a "tying the knot" trick). The resulting semantics are extremely tricky, and most practitioners are aware of the problems with them (see "fragile base class").

Some recent programming languages have abandoned implementation inheritance altogether, e.g. Rust. IOW, they're really more like FP-semantics languages with OOP-like syntactic sugar on top.

While you’re of course right that implementation inheritance can’t commonly be modeled in FP, it can be used in multi-paradigm languages with explicitly functional semantics devoted to it (eg Swift’s take on structs), and its purpose can be handled in FP languages either by data structures providing consistent interfaces into their constituent parts (exceedingly common in lisps) or by static types with similar consistency (exceedingly common in MLs).

In the latter cases it’s not inheritance in the sense that one thing derived from another, but that all the things you’d model that way are derived from broadly compatible base types.

Aside: Rich Hickey’s introduction to Clojure made all of my lisp anxieties vanish when he illustrated the syntactic difference as primarily moving a function call’s open bracket before the function name rather than after. It’s silly in hindsight, but it helped me feel more familiar at once and ready to learn the rest.

Un-aside: I think this kind of exercise would be valuable in translating functional-ish OOP (state is isolated but largely operated on with stateless functions) to its FP syntactic equivalent.

For example, in Clojure you could syntactically rewrite (map some-fn some-hash) as a map method on a class instance, and it's just moving some punctuation around.
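In TypeScript terms (as a rough stand-in for the Clojure example), the same computation reads either as a method on the receiver or as a free function taking the receiver as an argument:

```typescript
// Method style: the receiver comes first, then the call.
const doubled = [1, 2, 3].map((x) => x * 2);

// Free-function style: the "receiver" is just another argument.
function map<A, B>(fn: (a: A) => B, xs: A[]): B[] {
  return xs.map(fn);
}
const doubledToo = map((x: number) => x * 2, [1, 2, 3]);
```

Either way it's the same call with the punctuation shuffled around, which is the point of Hickey's illustration.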

Another one which is probably not very mind blowing here, but did tickle my brain when I recognized it for what it was: before the great concurrency upheavals over the last decade+ (eventually settling on Promises and async/await), Node had monadic Either/Option types as a core part of its interface (you just destructure them as callback arguments).

None of this is meant to disagree with anything you said of course. Just wanted to add some “if you’re approaching state the same way there’s a representation in your environment” flavor to the discussion.

My claim was specifically about implementation inheritance. Providing 'consistent interfaces' falls under interface inheritance (which is non-problematic), and 'inheriting' static data types only is semantically indistinguishable from composition. The problem with OOP is also how it conflates patterns that have very little to do semantically with one another as one overly general thing ('inheritance!') even though the general case is not really useful.

I understood your claim for what it’s worth. That’s why I cited Swift structs. They’re objects in the historical sense, functional in the immutable sense, and provide explicit inheritance semantics. The rest of my response was about achieving the same purpose without implementation inheritance.

I didn't think Swift structs could participate in inheritance.

I stand corrected! I thought I remembered this from reading the Swift docs day one. So much for my long term memory.

And indeed it's not new for people to argue that implementation inheritance should be left out of OOP, it's been going on since the early days.


However, especially (but maybe not only?) in a dynamic language like smalltalk or ruby, you can simulate implementation inheritance pretty closely with just composition and (some kind of automated/macro'd) delegation if you want to. I'm not sure how/if that changes things at a theoretical/formal level.
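A static-language sketch of that idea (TypeScript here; the class names are made up): the "subclass" holds an instance of the "superclass" and forwards the methods it doesn't override.

```typescript
class Writer {
  write(s: string): string {
    return `wrote: ${s}`;
  }
}

// Logger "inherits" Writer's behavior by delegation rather than subclassing.
class Logger {
  private inner = new Writer();

  log(s: string): string {
    // Goes through this.write, so an override defined here is picked up,
    // loosely mimicking open recursion.
    return this.write(`[log] ${s}`);
  }

  // Hand-written forwarding; a dynamic language can generate this
  // automatically (e.g. via method_missing in Ruby).
  write(s: string): string {
    return this.inner.write(s);
  }
}
```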

Abadi and Cardelli's ς-calculus is a pure FP model that's almost nothing but implementation inheritance, and it's the best starting point for modeling those semantics, at least if what you want to do is design a type system for an OO language.

The basic rewrite rules are almost as simple as the λ-calculus; as I remember the notation, it looks like this:

    e.m → body[e/i] where m = ςi: body occurs in e
    e{m = ςj: c} → {stuff, morestuff, m = ςj: c} where
        e was {stuff, m = anything, morestuff} and
        there was no definition for m in stuff or morestuff
The expr[replacement/variable] notation implies not only replacement but also α-renaming to avoid variable capture just as in the λ-calculus.

You can, for example, define the Boolean values true and false as respectively

    {result=ςd: d.iftrue, iftrue = ςe: 37, iffalse = ςe: 5}
    {result=ςd: d.iffalse, iftrue = ςe: 7, iffalse = ςe: "yer mom"}
and then if you have some unknown Boolean value b you can compute a conditional as follows:

    b { iftrue = ςf: "hooray!", iffalse = ςg: "aww" }.result
Similarly you can define list node prototypes cons

    { null = ςx: false, car = ςx: 17, cdr = ςy: 72 }
and nil

    { null = ςx: true }
where true and false are the Booleans given earlier. Then you can define, for example, a length function

    { result = ςy: y.argument.null {
        iftrue = ςw: 0,
        iffalse = ςz: 1 + y { argument = ςa: y.argument.cdr }.result
      }.result,
      argument = nil }
assuming you have a suitable interpretation of "1 + expression". And, if not, you can rewrite that to something like one.plus { argument = ... }.result, with a Church-numeral-like construction if you're really enthusiastic.

I think the ς-calculus is a lot more ergonomic than the λ-calculus in practice, and I've written things like string-parsing code and vector arithmetic libraries in it, or rather in a programming language I implemented called Bicicleta, which is a thin layer of syntactic sugar on top of the ς-calculus, so you can write things like foo(bar, baz) instead of foo { argument1 = ςx: bar, argument2 = ςy: baz }.result and 3 + 4 instead of 3.'+' { argument = 4 }.result. But it's just syntactic sugar.

I'm still not sure if this was a good idea because I'm really skeptical of whether inheritance at all is a good idea. But if it's a bad idea, it's not because it rules out having a pure FP model or even makes it extremely complicated. It's already common to augment the λ-calculus with things like records, arrays, algebraic data types, even generalized algebraic data types, and Haskell-style typeclasses, any of which add a great deal more complexity than the tiny increment in complexity added by using the ς-calculus as a basis.

I'm ripping a quote from another HN user from the last OO/FP discussion I read, so apologies if you see this and are like, "Hey! I said that!" buuuut:

OO says: "State is hard—let's hide it!" (er, sorry, "encapsulate it")

FP says: "State is hard—let's isolate it!"

I think languages like F# and Racket get it right, because multiple paradigms are available to you.

I like F#’s stance on functional-first programming. There are times in which you want to expose the underlying types and there are times you do not. When I recently started a ray tracer implementation, it was a good example of this. The vector, color, point, transform, world, camera, etc. types were all readily implemented by records and discriminated unions which properly isolated but exposed the types. However, for the matrix library, I chose to use F#’s Array.Parallel library and thus a 1D array as the underlying implementation, and this was a perfect use case for using a class. I wanted to hide the implementation of the matrices from the user of the type and encapsulate the internal behavior, only exposing a clean API. Even in that case, the matrix type was immutable because the operations on matrices would simply return new matrices. I think F#’s acceptance of multiple paradigms is really the way to go.

Probably "State is hard—let's eliminate it!" is better if it is possible. Lots of states just shouldn't exist at first place because they are clearly derivations of some other state.

Encapsulation has nothing to do with hiding stuff. It is the idea that an object is effectively defined by its externally observable behavior.

I was being cheeky—I regret it.

Structs and classes work fine for that. You don't need getters/setters/inheritance/factories etc.

Getters and setters aren't necessary to OO. They're one OO-compatible way to implement data access control. Inheritance is not about where you store the state, but behavior polymorphism. Factories are also not about where to store state, but about dependency injection. (Arguably they are not the right way to do it but one thing we can say for sure is that they are not there to solve the problem of where to store state.)

Not only are getters/setters not necessary, they actively work against good object orientation. The whole point of OO was to work with things in terms of their _behavior_ and not their _data_. The way OO is taught by creating data-centric classes has misled generations of developers into thinking they understand encapsulation by making a variable private and then providing a getter & setter for it.

I spend a good part of my time training Java developers away from this idea in order to help them make their code more testable and understandable. Queries and Commands are not just pedantic alternate names for Getters and Setters: thinking about objects with respect to what you can ask it to do (Command or Request) and what you can find out from it (Query) significantly improves the code that gets written.
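A small TypeScript sketch of that distinction (the Account example is mine, not the parent's):

```typescript
class Account {
  private balance = 100;

  // Command: ask the object to do something; it guards its own invariants.
  withdraw(amount: number): void {
    if (amount > this.balance) throw new Error("insufficient funds");
    this.balance -= amount;
  }

  // Query: ask the object a question; no side effects.
  canWithdraw(amount: number): boolean {
    return amount <= this.balance;
  }
}
```

Contrast with a `getBalance()`/`setBalance()` pair, where the invariant check would have to be repeated in every caller.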

The purpose is ergonomics essentially. Technically you don't need a high level language or even assembly and could work just in machine code with all of the other organizational description of it documented elsewhere.

The real question with all features is "Does it make a given task easier, better, or possible Y/N?" Said question is incredibly vague by design and should differ by situation.

Getters and setters:

1. Are only conventions. Nobody's forcing you to use them.

2. Simplify refactoring.

3. Are a necessary layer of abstraction any time that 'setting' a value requires updating a related value.

#1 and #2 are only a matter of taste and implementation-specific use cases of tooling built on top of a language. #3, however, is absolutely necessary, to avoid entire categories of business logic errors.

Just imagine if updating a string's value required you to also manually update the string's length. How many string-length-related bugs do you think that kind of adventure would produce in a typical program?

The very concept that you can change a string's value or length is an absurdity. A string is a value. If you want a different string, form one.

I'm not saying that a string has to be mutable; I am saying that when you assign a value to a string variable, such as during its first (and let's say 'only') initialization, it would be absolutely insane to also have to set the string's length.

Arguing against setters is like arguing that

    String s = "foo";
must also be accompanied with

    s.length = 3;
The two operations make zero sense to be done separately - doing so is just a minefield of bugs. The purpose of a setter is to combine similar inseparable operations together.

A trivial pass-through setter obviously provides ~zero value, but it also carries with it ~zero cost. Any compiler worth its salt will optimize its invocation away.

> I am saying that when you assign a value to a string variable, such as during its first (and let's say 'only') initialization, it would be absolutely insane to also have to set the string's length.

You should have a constructor/factory for strings that will form/instantiate the string in a valid state, with its value matching its length. That's not the same thing as a setter that mutates the string in-place.

getters, setters, and factories allow the implementation to evolve without breaking client code.

That only matters for interfaces which you expose to clients. The real problem with getters & setters & such is when they gum up the internal workings of a codebase, abstracting over nothing and complicating things unnecessarily.

I agree with that. If you create accessors as a matter of course, then you are mostly adding dead weight. But accessors can add a useful layer of abstraction at the interface layer, as you say.

C# allows you to define getters and setters after the fact without breaking client code.

  foo.bar = x; // could be normal assignment or a setter.

not quite! a property is a pair of methods. if you replace a member with a property, it does remain code compatible as you describe, but breaks the ABI. so if you do this in a published library, clients require a recompile. for this reason, they recommend:

1) different naming convention between members and props

2) dont expose members; use properties from the start for anything public

to assist with 2), they introduce the default get/set implementation, like so:

    public object Derp { get; set; }

for when you want to expose an interface with the assignment idiom but dont actually have any interesting implementation

>You gotta stick the state somewhere

The state is just data structures. You don't need to stick it anywhere aside from some namespace.

You can then have functions operating on those data structures and be fine.

For protection/privacy just make sure access to the data structure instances is not global.

I read the quote first and I thought you meant California when you said "the state".

And it's still amusing how intensely Alan Kay trashes him.

In FP, the state got passed around in the function parameters and the return value. Oof.

It's not the state that's the problem, it's the state being bundled with the code that operates on the state.

Why shouldn’t they be bundled together? They’re completely interdependent - each is useless without the other.

Namespacing them together is good. Hiding the coupling between them isn't. If a function makes use of some data, you want to see where that data comes from, not have the function fetch it via a magic portal to some other part of your code.

Well the common FP answer is that data might have many operations on it that are totally orthogonal and that don't depend on shared behavior. I say this as someone who thinks both paradigms have something to offer.


This is probably an unpopular opinion but I really can’t wrap my head around the arguments for FP and against OOP.

At the end of the day, regardless of which paradigm you choose, you’re still just defining concepts, and every concept has properties (both externally visible or not) and functions. The problems raised against OOP always seem to be a problem with the average programmer’s lack of understanding of ontology, so they put structure and function inside concepts where they shouldn’t be. This lack of understanding of ontology is why, in my opinion, there’s equal opportunity for both paradigms to write equally ugly and horrible code.

I mean, I’ve seen people use RxAnything in a modern, expressive programming language that allows both OOP and FP, and still they ended up defining massive view controllers, incomprehensible interface names, extensions that apparently exist but aren’t visible to the programmer from anywhere, property assignments that apparently invoke functions under the hood, etc. If programmers write horrible code with OOP and they still write horrible code with FP, maybe it’s not the paradigm that’s the problem, and maybe it’s the common denominator: the programmer.

> you’re still just defining concepts, and every concept has properties

Sure; but lots of concepts (maybe most) are best expressed as either functions or bare data. OO isn't "bad" - but it's massively overused. Lots of programmers (and programming languages) consider it the default abstraction, even when it doesn't make sense. Encapsulating state within a class, with lifecycle methods, often makes code more complex, longer (~50% more LoC is common), slower (due to heap thrashing and the inability to vectorize operations), and harder to reason about.

For example, modern javascript has a built-in library for converting binary data to text (either UTF-8 or other formats). This should be a method decodeText(arrayBuf, 'utf-8'). But the API designers instead made a class you need to instantiate first. ( d = new TextDecoder('utf-8'); d.decode(arrBuf) ). This is a strictly worse API, which invokes awful questions like "Is it expensive to instantiate?" "Should I cache it?" "Is there extra hidden state in the text decoder?"
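For reference, the two shapes side by side (`decodeText` below is a hypothetical wrapper, not a platform API):

```typescript
// What the platform actually provides: instantiate, then call a method.
const decoder = new TextDecoder("utf-8");
const viaClass = decoder.decode(new Uint8Array([104, 105])); // "hi"

// The plain-function shape the parent comment is asking for.
function decodeText(buf: Uint8Array, encoding = "utf-8"): string {
  return new TextDecoder(encoding).decode(buf);
}
const viaFunction = decodeText(new Uint8Array([104, 105])); // "hi"
```

The wrapper makes the hidden-state question moot: there is nothing to cache, and nothing survives the call.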

I agree that people can write awful code in any language. I've certainly managed to make messes at one time or another in just about every language I've learned well. But my personal dislike of OO comes from the exhausting, never ending fights I've had trying to convince coworkers to use simple functions and bare structs when appropriate. And to please stop trying to force every program, no matter the language, to look like bloated Java code.

> Sure; but lots of concepts (maybe most) are best expressed as either functions or bare data.

That’s begging the question, isn’t it?

> OO isn't "bad" - but it's massively overused. Lots of programmers (and programming languages) consider it the default abstraction, even when it doesn't make sense.

I think that this really drives home my point that the problem is with programmers and their lack of ontology know-how.

Regarding your example on text decoders in JavaScript, you really should be asking the same questions about the functions that you are invoking, too. Just last month there was a huge article here about sscanf taking quadratic time. That could happen to any of the functions that you are invoking.

Again, languages from both paradigms equally allow for bad code to be written. I really don’t think that the debate should be at the paradigm level. We’re better off debating about specific policies in code style.

> Just last month there was a huge article here about sscanf taking quadratic time. That could happen to any of the functions that you are invoking.

I don't really know how that would have been made better or worse by OO. The issue there came from a lack of transparency - it's not obvious that sscanf takes time linear with the size of the input string. (FWIW, I didn't know that either).

But both OO and functional approaches can obscure details like that. So it seems kind of tangential.

> We’re better off debating about specific policies in code style.

I don't think good taste can be enforced at a code style level. The problem we have is that all problems can be solved either with OO or FP, or with actors and so on. It's much easier for programmers to learn one style and apply it everywhere, rather than learn all the different ways of expressing something and apply each one only when that approach is appropriate.

Is this what you mean by 'ontology'? When I hear that word I imagine the question of "What is the ontology of data types in our program?" - "We have products, and orders, and users, and ...". Which usually presupposes classes even when they're the wrong tool. Thinking about ontologies is a leading question, and it'll lead you to a different program from classical structured programming's question of "What does your program do". Or FP: "What does your program output, and where does that output come from?" Or dataflow programming: "How does the data move through your system?"

Programming isn't an ontology problem. It's an expression problem - how do you express your program to the computer? I still don't know how to say that in response to an overly OO PR. "Thanks for the change, but this is an inelegant expression of the problem you're solving. Can you please make the data flow primary and obvious in your code, and the ontology secondary?". That doesn't tend to go down well. Even when they're open to it, I get a lot of confused responses. "Huh? Without classes? How would you even do that?"

> Again, languages from both paradigms equally allow for bad code to be written. I really don’t think that the debate should be at the paradigm level. We’re better off debating about specific policies in code style.

This perspective is actually important. When you look at OOP it is a paradigm but not a well defined one. You certainly wouldn't call it a policy.

FP (pure functional programming) however is much closer to a policy in the sense that there isn't so much to argue about. You can see from a piece of code if it is "proper" pure FP or not, whereas for OOP, it is easy to do things that are claimed to be "OOP" by half and claimed to be "improper OOP" by the other half.

And depending on how you define OOP, it is actually completely orthogonal to FP. Just like OOP puts mostly helpful restrictions on how you can model your problem in code, FP puts additional mostly helpful restrictions on it.

I happily use FP+OOP in Scala, and I agree with what you say about RxJava; also, most Scala code is bad.

The benefit of FP as a practice is more modular functionality. Imperative code isn't always tangled, but it often is. OOP is for more modular data.

Exactly this.

If curious, past threads:

Why OO Sucks by Joe Armstrong (2000) - https://news.ycombinator.com/item?id=19715191 - April 2019 (380 comments)

Why OO Sucks - https://news.ycombinator.com/item?id=9481369 - May 2015 (16 comments)

Joe Armstrong: Why OO Sucks - https://news.ycombinator.com/item?id=4245737 - July 2012 (256 comments)

Why OO Sucks - https://news.ycombinator.com/item?id=474919 - Feb 2009 (114 comments)

Thanks. This is one of those discussions people keep having at nausea, one of those simulacrums people invent so they have something to feel superior about.

> at nausea

In case the error was unintentional, _ad nauseam_.

[1]: https://en.wikipedia.org/wiki/Ad_nauseam

Or craftspeople enjoy talking about their craft, and we crowd around shared negative experiences because humans like to commiserate.

Sure explains why I don't like talking to other software engineers

In my experience it's 50% commiserating and 50% optimistic daydreaming. I prefer the latter but the former has its place.

Years ago, I maintained (tongue in cheek) that OOP was a great paradigm not because it was inherently better, but because it was so bad that you had to rewrite your code several times to make it work. And once you've written something three or four times you begin to figure out your mistakes.

Then that Gang-of-Four "Design Patterns" book became popular and really screwed things up. I swear I didn't see factories-making-factories-making-factories until that thing was published, then I couldn't go a day without encountering yet another SingletonFactoryWorkerVisitor or whatever was cool that week, ugh.

I'm joking, of course. Except about Gang-of-four. And rewriting your code.

> I swear I didn't see factories-making-factories-making-factories until that thing was published

No? That's a pretty good description of the sort of metaprogramming that Lisp gives you, and Lispers are really enthusiastic about that kind of thing. People will dispute the parallels, but the real difference comes down to:

- non-Lisp languages being significantly less powerful

- Lisp's veneer of respectability

... which shouldn't be all that odd, considering the person who gave us "design patterns" was Dick Gabriel, a Lisper.

At least the "SingletonFactoryWorkerVisitor" programmers use a nomenclature that reflects the ontology the object is supposed to fit into.

My guess is that LISP programmers are largely a self-selected group of pretty decent hackers. I'm basing this on some exposure to Lisp Machines in the 1980s, and some Common LISP stuff in the 90s. Generally adults. LISP is my favorite language I'll never ship a product in (well, okay, after SmallTalk :-) ).

The Gang-of-Four-driven stuff I saw in C++ and Visual Basic (late 90s and early 00s) still gives me the shivers. Like, "Hey, factories are cool, let's make a bunch of them for no reason because we might need the flexibility someday." Sigh.

There is an effect where some things work well in some languages, but fail completely when translated to others because of superficially unrelated features.

Compile and run time code generation seem to be one of those things. Having a mostly pure language (even when there aren't guarantees) makes them much saner.

Pre-compile-time code generation, in turn, seems to never work very well. And if there is developer interaction between the code generation and the compiling, then it's always a disaster.

but lisp has homoiconicity so lisp factories create 1st class objects at runtime, including new 1st class lisp factory factories.

I think Design Patterns is often painfully misunderstood and misconstrued. My idea of using the factory pattern, for example, is not to write a MarriageFactory class, but to write a Celebrant class, and we may recognise that it embodies the factory pattern if that is helpful to the comprehension or authoring of the code.

And so it goes with the others, many of which I use extensively in my own work, but at most there's a comment at the top saying, for example "this business rules structure is in the chain-of-responsibility pattern".

That is the material use of patterns, they are not abstract frameworks. Naming things makes a difference: encouraging a taxonomy of purpose, rather than of form, goes a long way to re-orienting a programmer towards problem-solving with concretions rather than abstractions. As a consequence, Design Patterns, along with Refactoring and PoEAA, continues to serve as one of my most-thumbed indexes of solution ideas.

Well goodness, I wrote a factory factory factory in OCaml only eight days ago:

    (* A goal that succeeds if v is one of `terms` *)
    let rec amb (terms : term list) (v : term) = match terms with 
      | [] -> null
      | term::ts -> disj (match_goal v term) (amb ts v)
    and null (s : state) = Mzero            (* a goal that always fails *)
    and ...
For those who don't speak OCaml (whose syntax is really quite unintuitive), that's a function named `amb` which takes a parameter called `terms` of type `term list` (in C++ syntax, that would be list<term>). It returns an anonymous function that takes a parameter called `v` of type `term`. This function may call `disj`, or it may return `null`, which is not a keyword, but actually another function I defined; it takes a parameter named `s`, which is of type `state`, and returns Mzero, which is at long last an honest-to-God singleton constant. (Of the parametric type `α stream`, as it happens, but let's not get distracted with that.)

So `amb` is, in object-oriented parlance, a state stream factory factory factory. Upon being passed a `term list`, it returns a factory, which, upon being passed a `term`, returns a factory, which, upon being passed a `state`, returns a `stream`. (I only traced the `null` path in this comment, but the `disj` path also returns a stream, specifically a `state stream`—OCaml's static type checking guarantees that both return paths return compatible types.)

You might think that isn't the way it's actually used, but my example code uses it in precisely that way:

    dump_stream (call_fresh (amb [Const 17; Pair(Const 20, Const 20); Const 11]) empty_state) ;;
Here `amb` is being invoked with just a `term list`. (OCaml separates list items with semicolons, so that commas always make tuples.)

So I think factory factory factories are very useful, but also I think they're really hard to keep under mental control without a functional language with a strong static type system. How on earth would you translate those four lines of code above into Java? With four inner classes, maybe, nested eight levels deep with {{}}?

If you're interested, I was translating μKanren, a Prolog-like logic-programming language originally written in 39 lines of Scheme, into OCaml: http://canonical.org/~kragen/sw/dev3/mukanren.ml

A factory factory factory would be a bag of hidden mutable state that could construct another bag of hidden mutable state that could construct another bag of hidden mutable state that could construct another bag of hidden mutable state, making it virtually impossible to track down where anything in the final result had come from - things could come from any mutation to the final result, or some but not all mutations to any of the intermediate results.

Without the mutable state it's not a factory, it's just a function, and there's no confusion and no problem.
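In TypeScript terms, the stateless version is just currying (the names here are made up for illustration):

```typescript
// A "factory factory" with no hidden state: each stage only closes
// over the arguments it was given.
const makeGreeterFactory =
  (greeting: string) =>
  (punctuation: string) =>
  (name: string): string =>
    `${greeting}, ${name}${punctuation}`;

const makeExcitedGreeter = makeGreeterFactory("Hello"); // the "factory factory", applied once
const greet = makeExcitedGreeter("!");                  // now just a plain function
```

Every result is fully determined by the arguments visible at the call sites, which is the point about being able to track where things come from.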

I don't agree with either of the claims in your comment.

Although the GoF Abstract Factory pattern doesn't rule out having mutable state in your factory object, that is neither necessary for its purpose nor common practice when applying it in OO programming languages. The example in GoF is MotifWidgetFactory (for shitty Unix GUIs) vs. PMWidgetFactory (for shitty OS/2 GUIs), and in fact GoF says, "An application typically needs only one instance of a ConcreteFactory per product family," suggesting that they were thinking of cases where the factories don't even have immutable state such as the `terms` argument to `amb`.

They do explore a "prototype-based" factory that is stateful—but there the statefulness is just used as a way to configure the factory at startup, by building up a "product catalog" in it. Creating products doesn't modify the factory's state, and you could configure the "product catalog" just as easily by passing it in as an argument when the factory is instantiated; it's just that, in Smalltalk, that kind of thing is normally represented by segments of executable code that invoke a long sequence of side-effecting methods, rather than by data structures.

(You can also represent that kind of thing by invoking a long sequence of pure functions, as in the "fluent interface" pattern that became popular with jQuery, but that isn't the Smalltalk way to do things, especially in 01994.)

The GoF examples other than the prototype factory are WidgetFactory and MazeFactory; in InterViews, WidgetKit, DialogKit, and LayoutKit; and in ET++, WindowSystem. Kerievsky's examples in Refactoring to Patterns (where he points out that people constantly conflate the various "factory" patterns, and that the lines between them are somewhat vague) are HTML parser AST node creation, ORM attribute descriptor creation, java.util.SynchronizedCollection and UnmodifiableCollection, and an OutputBuilder interface that allowed him to create either DOM nodes or XML output from the same unit test code. In all of these cases except the OutputBuilder, the factory objects are stateless.

So, by your definition that a factory is what you get if you take a function and add mutability, we find that 6 out of 7 of the GoF examples of "factories", and 3 out of 4 of Kerievsky's examples of "factories", are actually "functions" rather than "factories". I think this clearly shows that the categorization you propose does not coincide with the scriptural explanation of what "factory" means.

It's true that, in the case where there's no state, you can do this with just a pointer to a function. But pointers to functions don't exist at all in languages like Java, and aren't first-class values in languages like Pascal, where they are subject to various restrictions that keep you from using them like this.

Even in languages like C, which have first-class function pointers but not closures (except as a GCC extension subject to Pascal-like restrictions), you can't create new function pointers at run-time. So in C, or C++, you can replace a singleton factory like the ones GoF considers usual with a function pointer, or maybe a record of function pointers. But you can't write a factory factory that way, much less a factory factory factory.

So, while you're right that in a sense an immutable factory "is just" an indirectly-invoked function—that's the main point of my comment—if you want to do that in a language like Smalltalk or C++ (at least in 01994, when they wrote the book) or Java (at least in 02000), the Abstract Factory pattern in GoF tells you how to get it working.
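To illustrate that point, here's a sketch in Python (all names are made up, not from GoF): a stateless factory collapses to an indirectly-invoked function, and a factory factory collapses to a function returning a function.

```python
# A "stateless abstract factory" collapses to a plain function once the
# language has first-class functions. Names (motif_button, factory_for,
# etc.) are illustrative only.

def motif_button():
    return {"kind": "button", "toolkit": "motif"}

def pm_button():
    return {"kind": "button", "toolkit": "pm"}

# The client invokes the "factory" indirectly, never naming the concrete product:
def build_dialog(make_button):
    return [make_button(), make_button()]

# And a "factory factory" is just a function that returns a function:
def factory_for(toolkit):
    return motif_button if toolkit == "motif" else pm_button

dialog = build_dialog(factory_for("motif"))
```

In Smalltalk, C++, or pre-lambda Java you'd have to wrap each of these functions in a class to get the same indirection; that wrapping is the Abstract Factory pattern.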

> Although the GoF Abstract Factory pattern doesn't rule out having mutable state in your factory object, that is neither necessary for its purpose nor common practice when applying it in OO programming languages.

An object by definition encapsulates mutable state. Even if a particular object happens to not contain any mutable state, there's no way to observe that from outside, by design.

If your language forces you to use an object to represent an (immutable) function value, all I can say is don't use such a shitty language.

> An object by definition encapsulates mutable state. Even if a particular object happens to not contain any mutable state, there's no way to observe that from outside, by design.

If I read you correctly, your definition of "object" excludes each of the different concepts that C++, D, Python, and Abadi and Cardelli's object-calculus call "objects", but includes what Scheme, OCaml, and Kotlin call "functions"? You are of course welcome to use that definition, but it might improve communication to clarify that you're using the same word that other people use but with a completely different meaning.

> don't use such a shitty language.

There's a plausible reading of the GoF book and the whole "OO patterns" movement that it's largely about how to make do with shitty languages, though I'd nominate SICP as being better at that. Often languages that are shitty on one axis have compensating advantages on other axes—for example, although you can script Cocoa with cffi in Python, which is less shitty than Objective-C for scripting in general, you waste a lot more time debugging segfaults; despite the admitted aesthetic advantages of 68000 assembly with respect to amd64 assembly, the latter performs noticeably better on this laptop; and, while I think I've convincingly shown that a factory factory factory is dramatically more readable in OCaml than in Java, you're probably going to have a much easier time getting your remote dictionary server performance to scale to 16 cores in Java than in OCaml. If you have enough RAM, anyway.

> If I read you correctly, your definition of "object" excludes each of the different concepts that C++, D, Python, and Abadi and Cardelli's object-calculus call "objects", but includes what Scheme, OCaml, and Kotlin call "functions"?

That's not my intention; I'm trying to match the definition of an "object" in common programming languages. As far as I can see an object (instance) is essentially a bundle of method handles associated with an opaque (possibly empty) bundle of (possibly) mutable state. (In some languages the bundle might also include some visible state, but that doesn't change the essence of the thing, since that can be emulated with method handles - indeed Kotlin does exactly this). In Python's case the opacity of that state is a convention rather than a strict rule, but I think the general consensus would be that it should generally be treated that way, and also that Python's objects are somewhat less object-ey than those of other languages. Certainly I'd say this definition is a close fit for C++ and Kotlin; what are you seeing as different there?
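To make that definition concrete, here's what such a bundle looks like when built from nothing but closures (a Python sketch; the names are made up):

```python
# A closure-based "object": a bundle of method handles closing over an
# opaque, possibly mutable piece of state. Illustrative sketch, not any
# particular language's object model.

def make_counter(start=0):
    state = {"n": start}                 # invisible from the outside
    def incr():
        state["n"] += 1
        return state["n"]
    def peek():
        return state["n"]
    return {"incr": incr, "peek": peek}  # the "bundle of method handles"

c = make_counter()
c["incr"]()
c["incr"]()
```

By the bundle-of-method-handles definition, `c` is an object, even though no class is involved anywhere.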

Hmm, I had thought that Python didn't allow you to add data attributes to objects derived from immutable classes like `tuple` or `str`, but evidently I was wrong about that!

    >>> class S(str): p = lambda me: len(me) + 3
    >>> S("hi").p()
    5
    >>> S("uhoh").z = 3
You can't even detect this by enumerating the object's attributes with `dir()` or `.__dict__` at some moment — a method, or external code, can add new attributes at any later moment.

(Hypothetically in CPython maybe you could disassemble the bytecode of a method to see if it tries to mutate the object or some global variable, but even if this is possible, you can also use cffi to change the value of 3 in CPython. I regard both of these as breaking the semantics of Python per se.)

So, I was wrong about Python. And, on further reflection, I was wrong about C++ as well, because even if you provide a `const`-argument overload for a function, which is a thing you can do to observe whether an object is `const` or not from the perspective of your caller:

    int x(int* foo) { return *foo + 3; }
    int x(const int* foo) { return *foo + 2; }
    // declaring as `const int i` changes program behavior:
    int main(int argc, char **argv) { int i = 7; return x(&i); }
there's not even a compiler warning if your caller fails to do so as well:

    int x(int* foo) { return *foo + 3; }
    int x(const int* foo) { return *foo + 2; }
    // int y(int* foo) { return x(foo); }
    int y(const int* foo) { return x(foo); }
    int main(int argc, char **argv) { int i = 7; return y(&i); }
D's `immutable` is externally observable in the same way as `const` in D or in C++, but rigorously so in the sense we want here: invoking a function whose parameter is `immutable` with a mutable object is a compile-time error.

Also, D's `immutable` is transitive, unlike C++'s `const`, but like, for example, E's DeepFrozen.

In the simple object calculus, everything is an object and there is no mutability at all. You could reasonably object (heh) that, although it's intended to capture object-oriented programming, and Abadi and Cardelli's book is called A Theory of Objects, it's not widely used, so opinions may differ as to whether they succeeded. https://news.ycombinator.com/item?id=26588642 goes into more detail there.

So your definition of "object" only excludes each of the different concepts that D and Abadi and Cardelli's object-calculus call "objects" (and I guess I should add E), not Python's or C++'s as well.

As for Kotlin, my claim about Kotlin was not about Kotlin "objects" but about Kotlin "functions", which are closures that can include access to mutable state, just as in Scheme and OCaml. I don't know Kotlin well enough to know whether your definition of "object" fits its definition of "object". It's an interesting question. Maybe it does! Kotlin's data classes, for example, can contain mutable attributes, and there really is no (non-reflective) way to observe them from another class — for example, for a method to refuse to accept an argument if it happens to be mutable in this way. Kotlin/Native evidently has a form of transitive and externally observable immutability, but I don't know much about it.

Your revision of the definition to include a bundle of method handles does seem to neatly exclude what are called "functions" in Scheme, OCaml, and Kotlin. I think it might also exclude objects in E, but since E is dynamically typed, that's sort of an implementation detail, and anyway E doesn't figure prominently in the consensus definition of the term "object".

Thank you for a very educational and stimulating discussion!

The funny thing is that rewriting your code a lot is much easier on FP with flexible types.

You just need to not pay any attention to design and start coding right away. Then you'll rewrite your FP code even more than you did for your OOP code, and still have an easier time overall.

In all seriousness, I've found that a very nice way to develop FP code. It always looks good at the end anyway, and there's no risk of getting into astronaut mode.

The only thing that changed after that book is that people started to put "Factory" into the name of factories and "Decorator" into the names of decorators.

Both existed for a long time before, but did not have names that instantly made them recognizable as such.

"Data structure and functions should not be bound together": not if you don't want that, but often you do, to avoid constantly asking the question "where's the code that messes with this data?".

"Everything has to be an object": obviously bad, but also not the case in most OO languages. So...bogus objection.

"In an OOPL data type definitions are spread out all over the place": Huh? This one is made up. You can put all your <whatever> in one place if you want to.

"Objects have private state": this is a feature not a bug. Exhibit A: JavaScript.

I mean, OO isn't impervious to this: "Where is the factory to create this object?" "Where are the factorIES that create this object???" "Where is the service to perform this action on this object?" "Where is the manager that coordinates these objects????"

This can be solved with some knowledge of DDD (for example) which applies to both paradigms.

In Ruby everything is an object. Of all the things people complain about with Ruby, I don't think that's one of them; I don't see how it's a problem.

> "Everything has to be an object": obviously bad

Personally I prefer when it's all objects. Otherwise you wind up with primitive types that you can't do all the objecty stuff with. Then you wind up with hacks like Java's int vs Integer dichotomy.

Compare this with Python where the internals of the integer type are so hidden that it can do things like seamlessly replace it with a bigint. What it is inside doesn't matter because you only care that it answers to the same messages.
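For example, in Python 3 the small-int/bigint seam is completely invisible: a value far beyond machine-word size is the same type, answering the same "messages", as a small one.

```python
# Python ints are promoted to arbitrary precision transparently; there
# is no int/Integer split and no observable representation change.
small = 2 ** 10
big = 2 ** 100            # far beyond 64 bits

assert type(small) is type(big) is int
assert big + 1 - 1 == big  # same arithmetic interface either way
```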

What are your thoughts on Project Valhalla on the JVM? I think making structs object-like is a win, as ultimately having everything involve extreme pointer chasing is wasteful.

I found JS to be one of the better OO implementations in the wild. All these OO additions since ES2015 felt like a step back to me.

> where's the code that messes with this data

That just sounds like a code organization problem not an argument in favour of OOP.

> but also not the case in most OO languages

The fact that a lot of "OO" languages don't enforce this doesn't change the fact that the general sentiment around object oriented programming as I see it taught and as I see new programmers approach it is that "everything has to be an object."

The problem with OOP is not just OOP languages but also OOP programmers who you have to deal with on a regular basis and have to explain that "this could just be a function, why do I need to instantiate a class to do this basic task?"

> Huh? This one is made up.

Once again, this doesn't matter in light of how people ACTUALLY write code in those languages and what actually is the status quo.

> this is a feature not a bug

Not sure I know enough about javascript to understand what you mean here.

Hiding state inside things is fine when it's done correctly, more often than not in reality when people write object oriented code it is done poorly. I certainly haven't had the same troubles understanding what happens to state when reading someone else's pure functional code vs when I've had to read someone else's object oriented code.

> where's the code that messes with this data?

Ouch! Try answering that in some Java Spring or C# MVC code. Then try to answer it in some not class-based Django or Rails code.

Unfortunately, the article did not have what I think is the best Joe Armstrong quote on OOP.

Here it is as quoted in:


>I think the lack of reusability comes in object-oriented languages, not functional languages. Because the problem with object-oriented languages is they’ve got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.

Thanks, this added extra insight for me

I'd suggest that OOP has its niche. Primarily it's a methodology for limiting the effects of changing state while keeping the codebase performant. Removing state where possible is obviously going to be the safer course of action, but immutable data structures are not as efficient as their mutable cousins. This may not matter for much software, but where it does, OOP isn't the worst solution when used selectively.

The problem with OOP in 2021 is that it tends to be the default, when arguably starting from safety and working back toward performance would be a better approach.

"Safety" and "performance" are not always the most important considerations. For example, Apple uses OOP so that its frameworks can evolve without breaking client apps. NSDictionary is a dynamic object because it permits the implementation to be changed or replaced, and this comes at a cost of performance.

Polymorphism isn't a trait unique to OOP; most, if not all, FP languages have that as well.

Right. But OOP makes it central, and builds around it, while FP de-emphasizes it in preference to ADTs.

Strings are a good illustration. Instead of an abstract polymorphic String type, Haskell provided a concrete String type as an ADT. This proved too inflexible, which is why we have Text, lazy Text, ShortText, etc. Compare to NSString which also has multiple representations but hides them behind a single polymorphic interface.

"OOP makes it central, and builds around it, while FP de-emphasizes it in preference to ADTs."

ADTs are not an intrinsic part of FP, as not all FP languages even have them.

I'd also question whether ubiquitous polymorphism is overall a good thing in a language, or whether it's misguided complexity. In most OOP languages, any public method can be polymorphic, but a polymorphic function is inherently less predictable than one that dispatches on a single type.

This sounds like Haskell’s Backpack, which lets you swap implementations of a module interface at will. But maybe I’m misunderstanding what you meant.

Polymorphism is front and center to everything in Haskell, which is why your comment sounds off to me.

Or you use -XOverloadedStrings and then you also have in Haskell multiple representations that follow a single polymorphic interface.

It does not. -XOverloadedStrings unlocks `fromString :: String -> a`. That is not a polymorphic string interface; it's just syntax sugar for making something else from a string literal.

I like Elixir and Erlang so I agree with the typical arguments against OOP, but I actually think value-based (as opposed to reference-based) OOP works quite well and jibes with functional programming. The best examples of this, that I know of, are LabVIEW, with its dataflow and value-based OOP that includes interfaces, and F#’s object programming, where immutable types such as records and discriminated unions can implement interfaces.

I rarely see points like yours raised (reference-based versus value-based) in the OOP vs FP debate, but these nuanced points really play a big role in the ability to produce quality code. I have constantly gone back and forth on my opinion on OOP vs FP, and it mostly comes down to discovering an expressive feature of a new language tipping me back to the other side. Using an OOP language with structural typing was one of these instances. Structural typing isn’t really an OOP thing, but that highlights how blurry the lines can actually be as to what is considered OOP vs FP.

OOP just isn't really defined. Maybe originally there were some more precise definitions, but nowadays everyone understands a different thing.

Therefore it's pretty meaningless to argue about OOP without giving a definition beforehand. Just looking at the threads here, there are so many discussions and contradictions simply because people make different assumptions and it often boils down to the true scotsman problem.

I agree, and find a large proportion of https://softwareengineering.stackexchange.com is people arguing backwards, starting from 'OOP is the best way to solve this' and then trying to figure out what they mean by 'OOP' in that context.

Objects are a useful pattern. Hiding data behind a suite of methods is great if you need to polymorphically treat a set of disparate data types in the same way, or if you want to expose an API to consumers and then change the implementation behind the scenes.

The problem with object-oriented programming IMO is that it demands we use this pattern even when it doesn't make sense. Sometimes you should just use a struct or a function. And when you abstract things that didn't need to be abstracted, your code becomes bloated and hard to understand.

OOP also encourages an outlook where objects are things with agency which do things and have responsibilities. Data should, more often than not, just be data; when you think of data as being the thing that has the agency, you risk winding up with classes like `ThingDoer` that have methods like `ThingDoer.doThing()`.
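For instance (a contrived sketch; the names are made up), compare an agency-laden "doer" class with plain data plus a function doing the same work:

```python
# The "ThingDoer" style: data is wrapped in a class whose only job is
# to carry out one calculation.
class PriceCalculator:
    def __init__(self, items):
        self.items = items
    def do_calculation(self):
        return sum(price for _, price in self.items)

# The plain style: data is just data, and a function acts on it.
def total_price(items):
    return sum(price for _, price in items)

items = [("apple", 3), ("pear", 4)]
assert PriceCalculator(items).do_calculation() == total_price(items) == 7
```

Same behavior; the second version just has less ceremony and no pretense that the data "does" anything.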

What if all this... A sucks use B... Is just... Different strokes for different folks?

Meaning, due to our brains being wired differently, some forms of programming might be easier for some people to understand and use, while other forms are easier for others.

If you assume "brain wiring", then you're basically making the case that everything is relative, and there's really no way to improve things beyond the state they're in. Programming can never get any better, and languages can never improve, because it's all different strokes for different folks.

But we definitely know how to make things worse for just about everyone. Use brainfuck. Or Piet. Or a straight-up turing machine.

And if we can make things (pretty much) objectively worse, then it's very likely we can make things objectively better. And there's not much reason to assume we've already reached some kind of end-game, where we're at the tip of the spear for language design, or even the core fundamental models of design.

I mean shit, we're only like 60 years into the subject. The legendary programmers of lore still walk the earth.

It might take some rewiring, but I for one believe there is a much better world for us to program in, than C, Java and C#.

I think much more likely than different "brain wiring" is universal brain damage, from working, breathing, living, with the languages we know, love and despise today. Brain damage might be harder to fix however; in which case we're dealing with language advances one funeral at a time.

I think, as of 2021, terms like "OO" or "OOP" (as well as "AI") should be avoided in serious technical discussions. Their definitions diverge way too much from person to person, which often leads to blanket statements (like "why OO is bad"), added confusion, and unnecessary drama. We should always strive for clarity.

I think just being aware of ambiguity and the damage it can do can help fix a lot of things.

Just think how "free", "justice", "advertising" can all be used to unwittingly miscommunicate, or deliberately mislead.

I highly recommend "Object Oriented Programming is Bad" and "Object Oriented Programming is Embarrassing" on Youtube.



"In an OOPL data type definitions belong to objects. So I can’t find all the data type definition in one place. In Erlang or C I can define all my data types in a single include file or data dictionary. In an OOPL I can’t - the data type definitions are spread out all over the place."

To me, this is the main argument. In many cases, I think we can go even further, defining the entire data model in SQL, letting it be the single source of truth on what the data model is, letting all code outside of the database adhere to it. The inverse of an ORM.

Fyi... the previous submission 2 years ago attracted 380 comments and the top comment says Joe Armstrong changed his mind on some of it. Apparently, the original blog post is actually dated 2000 and not 2019.

HN's "past" link: https://hn.algolia.com/?query=Why%20OO%20Sucks%20by%20Joe%20...

For people on mobile, here's Armstrong's quote, via @rhblake:

"I wrote an article, a blog thing, years ago - Why object oriented programming is silly. I mainly wanted to provoke people with it. They had a quite interesting response to that and I managed to annoy a lot of people, which was part of the intention actually. I started wondering about what object oriented programming was and I thought Erlang wasn't object oriented, it was a functional programming language.

Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.

Alan Kay himself wrote this famous thing and said "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages". He wrote that and he said that the initial reaction to object oriented programming was to overemphasize the classes and methods and under emphasize the messages and if we talk much more about messages then it would be a lot nicer. The original Smalltalk was always talking about objects and you sent messages to them and they responded by sending messages back."

Sure, but Smalltalk also has classes and inheritance, where messages are passed between objects. Kay invented the term, but he doesn't control the design of other languages or how OOP ended up being understood. Simula also preceded Smalltalk, and it influenced the design of the C++ object system.

I have no evidence other than the observation that OO mimicked the taxonomy used in the natural sciences. Example: Homo sapiens is of the class Mammalia and of the kingdom Animalia. This taxonomy originated with Aristotle as a way to describe the world. [1] The problem is that software is not needed to take a static snapshot of the world (state). Software is needed to automate and report on changes (transactions).

Aristotle wrote about change and thought about change in four ways. The material from where object came from. The form or template or shape of the change. The efficiency or agent that caused the change. The purpose or reason or final cause of why the change was made.

If we had followed this paradigm, instead of classes and methods we could now be discussing forms and changes.

Of course business folks would think this is all silly. They speak the language of accounting, which records transaction entries in a journal. Each month the books are closed and the transactions are summarized as the closing balance. Instead of classes, they talk about general ledger codes. Instead of methods, they talk about entries and reverse entries to correct an error. Instead of state, they talk about auditing the entries to verify the balance.

[1] https://davesgarden.com/guides/articles/view/2051

OOP (as Alan Kay conceived it) was explicitly inspired by biology. Objects are cells and communicate through exchanging messages. State is local and hidden, and data itself disappears, which means that the program as written may be ignorant of how operations are performed.

The irony of course being that Erlang ranks as one of the most OO languages, if one accepts the Alan Kay concept of OOP, which is rather more behavioural than structural, and thereby entirely compatible with the functional paradigm and algebraic forms generally.

See also https://www.youtube.com/watch?v=fhOHn9TClXY

A bunch of these objections are incorrect, or at a minimum apply only to certain implementations of OOP (both from an organizational and linguistic perspective).

"Everything has to be an object" is only true in some languages, and even then it's a questionable claim. If I have a Java class that has public data members only, how is that materially different from a data structure?

"Data type definitions are spread out all over the place" Again, a questionable assertion. It's plenty easy to define all your data types in one package or header file in Java or C++. Some folks choose not to organize it that way, but that's a different and substantially weaker objection.

"Objects have private state" Unless you want all parts of your system to be able to depend on the internal state of all other parts of your system, some kind of state access control is going to be necessary. I'm not aware of any system, OOP or not, where the internal details of, e.g., the Mutex datatype are available for inspection. This is private state by a different name.

"Why was OO popular?" I have a simpler explanation. For many years, the most performant unmanaged language with something even close to resembling a strong or useful type system was primarily an OO language (C++). There were no close competitors. And the fastest managed language was also an OO language (Java). I also deny the proposition that OO is materially harder to learn than other paradigms, supposing you want to learn in those other paradigms how to do the sorts of things that come built-in in OO languages.

> Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds. Why is this?

> Functions do things. They have inputs and outputs. The inputs and outputs are data structures, which get changed by the functions. In most languages functions are built from sequences of imperatives: “Do this and then that …” to understand functions you have to understand the order in which things get done (In lazy FPLs and logical languages this restriction is relaxed).

> Data structures just are. They don’t do anything. They are intrinsically declarative. “Understanding” a data structure is a lot easier than “understanding” a function. Functions are understood as black boxes that transform inputs to outputs. If I understand the input and the output then I have understood the function. This does not mean to say that I could have written the function.

Mostly trivial definitions plus a bizarre definition of "understand" for functions, the relevance of which is unclear.

> Functions are usually “understood” by observing that they are the things in a computational system whose job is to transfer data structures of type T1 into data structure of type T2.

Not even true. Oftentimes a function's job is to change the state of a data structure, or compute a new structure of the same type.

> Since functions and data structures are completely different types of animal it is fundamentally incorrect to lock them up in the same cage.

Sloppy metaphor that doesn't follow from any of the previous gibberish.

We've had FP, we've had OOP, the next paradigm to look out for is Data Oriented Programming. Tools like ECS with their systems and components have started to become rather popular in the gaming world, and is slowly spreading outside it while we find applications for it.

I understand it's not the best tool for the job, and object oriented is most likely is here to stay, but we need to broaden our horizon so we have more tools where applicable!
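For anyone unfamiliar, a minimal sketch of the ECS idea (in Python; all the names are made up): components are plain tables of data keyed by entity id, and a "system" is just a function that runs over whichever entities have the components it needs.

```python
# Minimal data-oriented ECS sketch: no classes per game object, just
# component tables and systems (functions) over them.

positions = {}   # entity id -> (x, y)
velocities = {}  # entity id -> (dx, dy)

def spawn(eid, pos, vel):
    positions[eid] = pos
    velocities[eid] = vel

def movement_system(dt):
    # operate only on entities that have both components
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, (0.0, 0.0), (1.0, 2.0))
movement_system(0.5)
```

Real ECS frameworks store components in contiguous arrays for cache locality; the dicts here are only to show the shape of the idea.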

Edit: Looks like this may have been addressed in a later interview? https://www.infoq.com/interviews/johnson-armstrong-oop/ (shared by frederikholm below)

This is a critique of mainstream "OOP". But the term "Object oriented programming" as coined by Alan Kay meant something different[1]. He was referring to something like actors: Tiny independently-operating machines (which, yes, contain hidden state) that provide answers in response to messages sent to them. You intentionally can't know how they work--they should behave as little computers.

I think this is a powerful paradigm which has still not been fully realized. (NSDistantObject in Objective-C comes close?)

I am excited for the new async/await stuff coming to Swift since it's explicitly moving towards an actor-based programming model. (While Swift, in general, tries to eliminate some of the negative aspects of previous implementations of OOP.)

[1] http://www.purl.org/stefan_ram/pub/doc_kay_oop_en
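For concreteness, here's a toy sketch (in Python; everything here is made up, and this is not Erlang or Swift semantics) of that "little computers exchanging messages" idea: the actor's state is hidden inside its loop, and the only way in or out is a message.

```python
# Toy actor: hidden state, driven entirely by messages on a mailbox.
import queue
import threading

def counter_actor(mailbox, replies):
    n = 0                      # hidden state, never shared directly
    while True:
        msg = mailbox.get()
        if msg == "incr":
            n += 1
        elif msg == "get":
            replies.put(n)     # answers arrive as messages, too
        elif msg == "stop":
            return

mailbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=counter_actor, args=(mailbox, replies), daemon=True)
t.start()
for m in ("incr", "incr", "get", "stop"):
    mailbox.put(m)
t.join()
```

From the outside you can't tell how the counter is represented; you only know which messages it answers.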

> Tiny independently-operating machines (which, yes, contain hidden state) that provide answers in response to messages sent to them. You intentionally can't know how they work--they should behave as little computers.

You mean... like an Erlang process? :)

Maybe? You tell me. :)

Do you communicate with them via messages? I guess I am biased because I always think of messages more like Obj-C/Smalltalk style

    obj doSomethingWithArgument: arg1 otherArgument: arg2
Vs something more like a protobuf or whatever. I guess as long as a message send in the language looks like a method invocation it's all equivalent.


    "Is Erlang object oriented?"

    "From that point of view, we might say it's [Erlang] the only object oriented language and perhaps I was a bit premature in saying that object oriented languages are about."

I think a lot of people think they need OO but they really just need modularity.

Contracts are actually quite necessary for modularity. In OOP we have data types (interfaces, classes).

As long as other paradigms provide contracts/abstractions, modularity can be achieved there also.

Faked modularity doesn't count. Leaky abstractions neither.

As a professional dev who has made a career out of working in "OOP" languages and codebases, I kind of agree. The reason why Functional Programming concepts are increasingly adopted by "OOP" languages is because Functional Programming is objectively better.

What makes functional programming objectively better? Are there studies which show this to be the case? What does better mean? Better for some people who prefer the functional approach? Better for some languages which support functional programming? Better in some situations which functional programming is well suited? Or just always better for everyone?

Just look at all the "OOP" languages where both the language developers and the user community are coming around to the facts that

* immutability and absence of state are preferable to mutation and statefulness

* Option/Maybe types (or "nullable types" which are a shoddy implementation of the same thing) are better than null

* Either/Result types are better than exceptions

* making things implement map, filter etc and sending in a function that describes what you want to do is better than manually eg looping through lists etc

etc etc etc
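To make the last two bullets concrete, here's a small Python sketch (the "ok"/"err" tags are made up for illustration, not a real library):

```python
# A minimal Either/Result-ish value instead of an exception:
def safe_div(a, b):
    return ("ok", a / b) if b != 0 else ("err", "division by zero")

results = [safe_div(10, d) for d in (2, 0, 5)]
oks = [v for tag, v in results if tag == "ok"]   # errors carried as values

# map/filter with a passed-in function, instead of an index-driven loop:
doubled_evens = list(map(lambda x: x * 2,
                         filter(lambda x: x % 2 == 0, range(6))))
```

The failure case flows through the program as ordinary data instead of unwinding the stack, and the transformation code says *what* to do, not *how* to iterate.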

Anyone who doubts how widely endorsed this is should just read what Brian Goetz and Josh Bloch have to say about how to code in Java.

Just imagine if these languages had been implemented with these ideas in mind from scratch instead of the current situation of trying to adopt and retrofit this style when the core libraries fundamentally don't support it.

The current trend of "OOP" languages is basically inexorably heading towards FP and abandoning the old school "OOP" style. Eventually they will only be nominally "OOP", mostly in order to please people who have irrational attachments to labels like that, but be way more FP in nature and in all but name.

For what it's worth, people shouldn't be irrationally attached to the "FP" label either. Labels aren't important - what matters is the code, how easy or hard it is to reason about it, how well it avoids entire categories of defects from even being possible etc etc.


These are all pretty terrible objections, IMO.

The first objection is extremely debatable. By packaging state and behavior and using interfaces and/or inheritance you can do certain things very flexibly that are just harder otherwise.

The next two don't apply to OOP in general. In most OOP languages I use, not everything has to be an object, and you can put interfaces all in one place if you want.

And the last is just wrong. State exists, even in functional patterns, which tend to hide the state in closures even more strictly than objects do! Monads aren't particularly different from object state in that you'll need some way to inspect the state, and it's not always visible locally from the PoV of the consumer of the object or monadic API.
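The "closures hide state too" point is easy to demonstrate. A minimal Python sketch (names are illustrative) — the closure and the class below are behaviorally identical, and the closure's state is, if anything, *harder* to inspect:

```python
# A closure that hides mutable state, much like an object's private field.
def make_counter():
    count = 0  # hidden state; there is no way to read it from outside

    def increment():
        nonlocal count
        count += 1
        return count

    return increment

# An equivalent object: the same data bound to the same behavior,
# just spelled with a class instead of a captured variable.
class Counter:
    def __init__(self):
        self._count = 0

    def increment(self):
        self._count += 1
        return self._count
```

Both produce 1, then 2, on successive `increment` calls; the only difference is where the state lives.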

Yes! Every functional codebase I see seems to be littered with the artifacts of an effort to escape the reality that the known universe and everything that happens within it is stateful... including our code. It’s inescapable, so why does the FP crowd make state even more difficult to reason about through batshit crazy abstractions? I’m honestly completely baffled by the FP movement and the trend of talking down OOP as some simpleton concept that should be abandoned.

Nothing in the universe is stateful. You only think otherwise because you impose an object abstraction on things and treat their evolution over time in a non-rigorous way. This lack of proper modeling of the effects of time on a system is what leads to all the problems with managing state.

FP makes this relationship of change to time explicit. Instead of having a "single object" that "changes state" when some event occurs, you have two objects - one before the event, one after. Systems that rely on mutable state conflate these two and then force you to deal with the consequences of that conflation.
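The "two objects - one before the event, one after" view can be made concrete in any language with immutable values. A hedged Python sketch (the `Account` type is made up for illustration):

```python
from dataclasses import dataclass, replace

# An immutable value: an "event" produces a new state
# rather than destroying the old one.
@dataclass(frozen=True)
class Account:
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # Returns the state *after* the event; the state before survives.
    return replace(account, balance=account.balance + amount)

before = Account(balance=100)
after = deposit(before, 50)
# before.balance is still 100; after.balance is 150.
# Both states coexist, so nothing can mutate "the" account out from
# under other code that still holds a reference to the old state.
```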

Does that mean computers aren't stateful? You have one computer before an event and another after?

Not quite. The computer you're referring to has one state before an event, and another state afterwards.

Modeling those as two distinct, immutable states provides enormous benefits over modeling it as a single state that "mutates". Doing this eliminates a large class of serious bugs.

The reason for these benefits is that far from trying to "escape the reality that the known universe and everything that happens within it is stateful," functional programming uses a more rigorous model of state that better reflects the relationships between states over time.

Your points assume that OOP programs consist of a single god class stuffed full of nothing but mutable properties... which is obviously not sound OOP design.

It doesn't have to be god classes. Any mutable property that's accessible beyond a local scope raises these issues, and involves the collapsing of state over time.

That's not to say it's never valid to make the conscious engineering choice to do that, but it's a poor default to have. Functional programming languages have proved that this can be handled better.

Don’t closures effectively break down this FP facade of local scope only mutability?

Not necessarily - it depends on the language and how closures are implemented.

For example in Java, closures, created via lambda expressions or anonymous classes, can only access immutable ("final") variables in their enclosing scope. (At least that was true in Java 8, not sure if that was relaxed later.) So they don't export mutability of local variables. (Although they don't prevent you from mutating object fields from within a closure - again, a consequence of the host language semantics.)

Of course, closures in other imperative languages often violate FP immutability constraints. But that's only because they're given unrestricted access to the underlying imperative language features.

An interesting case is OCaml, which allows closures to mutate ref cells, e.g.:

    let () =
      let x = ref 5 in
      let set y = x := y in
      set 4;
      print_int !x
...which outputs 4. This is compatible with OCaml's approach to mutation, and the mutable variable is at least not the default variable type - it's an explicitly mutable ref cell. But this certainly breaks various good properties that immutability provides.

For example, the function `set` above provides no indication at the type level that it performs a side effect, which undermines the ability to reason about function behavior without examining the implementation.

If you want to say OCaml is only providing an FP facade which is broken by this behavior, I'm sure many Haskell programmers would agree with you. :)

Btw here's a relevant article, "Closures don't mean mutability": http://blog.agiledeveloper.com/2017/01/closures-dont-mean-mu...

If you're using them to pass around mutable state, yes.

Which is why immutable/pure-functional is always the best first choice.

If it's inconvenient to do something with pure functions, and you want to pass a bit of state around, you can use the State monad - it's 'simulated' mutable state, in that it's all pure functions under the hood, but it looks and feels like you're doing mutation.

If it's too inefficient to simulate state via the State monad, you can use state threads (ST). This is more efficient in that it compiles down to real mutations, but it's safe in that you can't share your mutations until you've exited the ST. So it should force determinism. This is what you'd use for a fast, mutable, in-place sort.

The above two solutions rule out having multiple writers, which is where STM (software transactional memory) comes in. STM gives you atomic blocks which behave more or less like (in-memory) database transactions. I mainly use it for implementing workers and jobs and queues, etc.

And if you actually just want to do anything then you use IO.

It's more about being aware (in a compiler-checkable way) whether you're mutating or not than it is about outlawing mutation.
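The essence of the State monad - pure functions that take a state and return a value plus a new state, with the state threaded through explicitly - can be sketched outside Haskell too. A minimal Python rendition (an illustrative translation, not Haskell's actual types):

```python
# A "stateful" computation is just a function: state -> (value, new_state).
def push(item):
    def run(stack):
        return None, stack + [item]  # new list; no mutation
    return run

def pop():
    def run(stack):
        return stack[-1], stack[:-1]
    return run

def run_state(computations, initial):
    # Thread the state through each step, purely.
    state, value = initial, None
    for comp in computations:
        value, state = comp(state)
    return value, state

value, final = run_state([push(1), push(2), pop()], [])
# value == 2, final == [1]
```

It "looks and feels like mutation" once sequencing is abstracted away, but every intermediate state is an ordinary immutable value.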

> everything that happens within it is stateful

If that were the case I'd just fix your comment instead of appending another comment after the fact. But that would make the arguments harder to reason about I think.

> including our code

Do you use git? Or does your team all just edit the same code on a shared server? (The OO solution here is to make state 'private'. It supposedly doesn't really matter that you're all editing the same documents at the same time, because they're behind getters and setters)

> why does the FP crowd make state even more difficult to reason about?

State is difficult to reason about, so we maximise our stateless code.

> talking down OOP

I talk down OOP when I treat it as the diff between OOP and FP. That is, coding is 80+% the same whatever you use. It's the last 10-20% where the differences emerge: Lambdas are better than anonymous inner classes. Not having null is better than having null. Not needing to cast is better than casting. Composition is better than inheritance. Stateless is better than stateful. Value-equality is better than reference-equality. Generics + type erasure is clunky without higher-kinded types.

> The OO solution here is to make state 'private'. It supposedly doesn't really matter that you're all editing the same documents at the same time, because they're behind getters and setters

This just isn't true in the slightest sense. Honestly, where does this kind of anti-OOP trope come from?

It isn't a direct bashing of OOP. It's a bashing of the claim (see wiki quote) that you can make state manipulation safe by hiding it away inside a class.

    Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.
It's arguably the core concept of OO, and I don't believe it keeps data 'safe from outside interference and misuse'.

So doesn't FP make similar claims about mutation, i.e., "make state manipulation safe" through mechanisms like closures? How/why are closures considered more palatable?

I agree with Martin Odersky that functional programming within an OO approach to code organisation is the most sensible trade off between maintainability and bug reduction.

> Data structure and functions should not be bound together

Maybe not always. I suspect, based on Kay's writings, that Smalltalk's "everything is an object" thing was more about experimentation. They were trying to push one simple idea as far as possible, in order to see just how far they could push it. It turns out you can go pretty far. That doesn't necessarily mean you need to.

That said, the most popular language that shies away from "everything is an object" - Java - does so in a way that doesn't work well. Primitives are not objects, sure, but primitives then integrate poorly with the rest of the language (especially since Java 5), and more complex data structures must be objects, even if you don't want them to be.

Ironically, my favorite language for showing the nice things you can get by relaxing the "everything is an object" ethos, F#, actually does make everything into an object. (It has to, because .NET.) But it doesn't force you to think about them as if they were objects when you don't want to. So the mental space you live in when you're using the language feels primarily functional.

But sometimes objects are useful. The key distinction here - and the thing that Armstrong seems to miss in this essay - is that methods aren't just functions that have been glued to some data. Technically I suppose they are - that's certainly how it works when you roll your own OOP in a language like C - but it turns out the whole is more than the sum of its parts. Because you get a really useful thing that isn't so convenient to do when you keep your functions and your data separate: dynamic dispatch.

Dynamic dispatch - not encapsulation - is the killer feature of objects. Late function binding allows you to operate over heterogeneous inputs where the only thing you want to enforce is that they obey a protocol, without having to do a mess of explicit conditional branching or having to have all the details pinned down at compile time. Functional languages without any OOP facilities run into convenience problems here. Typeclasses cover some of the same use cases, as well as others that objects and interfaces don't handle very well at all, but they're statically bound, so they can be somewhat less flexible. It's often stated that design patterns are just a way of making up for a missing feature in your language. Well, the command pattern is what you do when your language doesn't support OOP.
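The dynamic-dispatch point fits in a few lines of Python (class names are made up for illustration) - heterogeneous objects, one protocol, no conditional branching:

```python
# Each class obeys the same informal protocol: it has a .render() method.
class Circle:
    def render(self):
        return "circle"

class Square:
    def render(self):
        return "square"

def render_all(shapes):
    # Late binding picks the right method per object at runtime;
    # no isinstance() chain, no compile-time knowledge of the types.
    return [shape.render() for shape in shapes]

results = render_all([Circle(), Square(), Circle()])
# results == ["circle", "square", "circle"]
```

Adding a new shape requires no change to `render_all` - which is exactly the flexibility the command pattern emulates by hand in languages without this feature.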

(For another example of where Armstrong was clearly operating under some misapprehensions when he wrote this, look at the truly bizarre statement he makes in the last paragraph of section 3.)

> Late function binding allows you to operate over heterogeneous inputs where the only thing you want to enforce is that they obey a protocol…

You can do that in typeclass-based languages as well, e.g. in Haskell:

  {-# LANGUAGE ConstraintKinds, ExistentialQuantification #-}
  data Constrained c = forall a. c a => C a
  showAll :: [Constrained Show] -> [String]
  showAll xs = map (\(C x) -> show x) xs
  main = foldMap putStrLn $ showAll [C (1 :: Integer), C ("abc" :: String), C (3.14 :: Double)]
The `showAll` function takes a list of objects that implement the `Show` typeclass. The objects themselves can be of different types. The syntax is slightly less ergonomic, but the runtime effect is much as you would expect from an OOP language—a pointer to the typeclass dictionary ("vtbl") is paired with each object, and the address of the appropriate `show` method is looked up at runtime for each element of the list.

Yeah an example of typeclasses being less flexible to me is if what you really want is OOP polymorphism for something like being able to choose what code to run based on configuration. Given that you can add objects to your class path at runtime, you can clearly get some extreme flexibility out of a system without clear parallels in typeclass based polymorphism. Obviously this isn't something you'd want to do all the time, but it's an example of something I think about periodically when comparing the two approaches.

With typeclasses, polymorphic behavior is driven strictly by the type, and one must follow that thought to its conclusion to see the difference. Sometimes it's more useful, sometimes not.

Also: Why do we need modules at all? by Joe Armstrong


> Objection 3 - In an OOPL data type definitions are spread out all over the place

Conventionally, that's true in non-OO languages with a good reuse story: functions are usually packaged with the definitions of and factories for the data structures they work on in modules.

Are there non-object oriented languages that are popular today? Python, Ruby, JavaScript, even PHP.

I’d like to hear some things from people who only started programming in languages like that and know no other way, but have maybe learned C or something.

Well, to the extent that TIOBE is good for anything I suppose it's good for this. The #1 language for March 2021 is C, a decidedly non-OO language. #2-8 are all OO languages or languages that promote an OO style. #9 is assembly (really?). #10 is SQL, hardly OO, it's a declarative language with no pretensions of OO-ness that I've noticed. #11 is Go which is only kind of an OO language, but not really in the sense most people mean when they discuss OO. Of #12-20, most are arguably OO languages of various flavors except for R, Perl, Matlab, and maybe Classical VB (only because I don't know exactly what they mean by that, how far back do we have to go to get to Classical VB? The fall of the Roman Empire? Is it primarily used with an abacus?). Below the top 20 down to 50, the remaining languages are mostly not OO languages, or their OO components are best considered a secondary or tertiary feature (Ada, for instance).

So, yes there are popular non-OO languages today. No, they probably don't dominate in overall interest or use, at least outside certain domains. But people are even putting Javascript on microcontrollers these days so it won't be long before it's OO everywhere. Hell, OO (via Java) has already been to Mars.


>> and maybe Classical VB (only because I don't know exactly what they mean by that, how far back do we have to go to get to Classical VB? The fall of the Roman Empire? Is it primarily used with an abacus?)

funny enough I worked at a place a year ago whose main language was VB6 and they were terrified of objects. Mostly exposed to it on the web side with python and php written by two new developers

Oddly some of their best code was some VB6 using OOP that someone had snuck in sometime years ago.

I picked up languages like that first and then picked up c later.

I tend to miss lambdas more than classes in c.

I'm not big on large class hierarchies, so classes tend to be more about conceptual organization to me. And throwing a bunch of related function pointers into a struct and passing a pointer to the struct itself almost kinda looks like a class if you squint hard enough. It keeps everything organized.

Packing everything I need into a custom struct, coercing it to a void pointer, and passing it somewhere to emulate a closure, on the other hand, feels dirty.

Python and PHP aren't object oriented languages. They are structured languages with optional objects.

Javascript is object oriented but has its own definition of "object" that differs from every other language, so people try to abstract it away and write most of their code in some other paradigm.

Lost this guy way too soon :(

I always thought of OO as a DSL. An object with methods and properties has an easier mental model than state transitions. You can still implement such objects with immutable state under the hood

Armstrong's death was one of the few that I was sort of hit by, despite not knowing him personally. I've watched so many of his talks, read as much as I could from him, and I've generally found him to be quite an inspiration and seemingly a great guy.

More on the point of the article, it's sort of fun to think about the fact that "OO Sucks" is sort of ironic given Alan Kay's initial description of OO was closer to actors than what we think of OO today (he acknowledges this at a later time).

As Kay puts it:

I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging" [..] The key in making great and growable systems is much more to design how its modules communicate rather than what their internal properties and behaviors should be. [0][1]

This is very much in the spirit of Armstrong's quote in the article:

> Since functions and data structures are completely different types of animal it is fundamentally incorrect to lock them up in the same cage.

Armstrong talked a lot about how shared mutable state was wrong on a fundamental level - it "breaks reality(/causality)" and that sort of thing. Again, sort of fun to think about the fact that the core ideas with actors seemed to have an origin in an early focus on asynchronous 'cell'-like computers, like the JOHNNIAC in 1953[2], even though the foundation of the model wasn't named or formalized until the 70s.

"Its designers began with the hope of stretching the mean free time between failures and increasing the overall reliability by a factor of ten". Systems like JOHNNIAC were IAS machines - asynchronous CPUs. These worked through causality, not synchronization.

In Armstrong's thesis[3] 'Making reliable distributed systems in the presence of software errors' he references papers like [4] 'Why do computers stop and what can be done about it? Technical Report 85.7, Tandem Computers, 1985.', which talks again about isolated processes and transactions as a foundation to reliable computing - over 30 years later.

This whole area is so deeply fascinating with a century of repeating, refining ideas, and I found that just by reading what I could from Armstrong I had a sort of rough guide through this area. There are these cool ties to early ideas about AI, and I guess a lot of people thought that languages should model life, and later Kay talks about this as well, and funny enough AWS now has "cell based architecture" - their discipline built around isolation and fault tolerance. A lot of this is sort of just random connections from jumping from paper to paper - but I just found it all really cool.

Reading this thread, it almost seems like people don't know who Joe Armstrong is? Or at least they've missed a lot of the point. This isn't an "X vs Y" from some rando, he built Erlang. It's also not about functional programming.

I highly recommend reading what he has to say, and watching his talks.

[0] http://wiki.c2.com/?AlanKayOnMessaging

[1] https://computinged.wordpress.com/2010/09/11/moti-asks-objec...

[2] https://www.rand.org/content/dam/rand/pubs/research_memorand...

[3] https://www.cs.otago.ac.nz/coursework/cosc461/armstrong_thes...

[4] https://www.hpl.hp.com/techreports/tandem/TR-85.7.pdf

In case you don't already know about it; Handbook of Neuroevolution Through Erlang by Gene Sher is a great demonstration of the approved way of using Armstrong's ideas.

Thank you! Unfortunately the part of my career where I had a lot of time to dive into this sort of thing is behind me (and hopefully ahead of me too), but I'll add that to my backlog.
