Object-oriented programming: Some history, and challenges for the next 50 years [pdf] (pdx.edu)
102 points by ingve 14 days ago | 99 comments



> which started at the Norwegian Computing Center in the Spring of 1961

The date is wrong; only Simula 67 qualifies as an "object-oriented" language (the term was not used then though). The development of Simula 67 started in 1966. The author probably means the predecessor language, which did not have the properties associated with object orientation (moreover, it was a simulation language, whereas Simula 67 was designed as a general purpose programming language).

> Smalltalk-72 ... was clearly inspired by Simula, and took from Simula the ideas of classes, objects, object references, and inheritance

Smalltalk-72 had no inheritance. Kay was aware of the idea but didn't like it; Ingalls only added it later, in his Smalltalk-76 implementation.

Not the only inaccuracies, it seems.


People disagree widely on what OOP really is/means. I've read and been in many definition debates. Even defining what inheritance, polymorphism, and encapsulation are is its own can of worms.

Well, some people disagree, as usual; that's not really an issue. If you need a theory and formal semantics, consult the standard work: https://www.amazon.com/Theory-Objects-Monographs-Computer-Sc....

That's just one person's model. It may be an interesting model, but it's not necessarily "true" in the sense of acceptance. Type theory is also a model, not a definition. It's useful in the practical sense, but that doesn't necessarily mean it defines "types". Competing models may be developed.

I think Alan Kay himself has suggested that what he really wanted when thinking of OOP was something more akin to the Actor model as espoused by Hewitt and realised by Erlang and to a lesser extent Scala on the JVM.

This model gives full benefit of FP at the actor level whilst being realistic about distributed messaging semantics.

I’ve always felt that it would be interesting to research processor cores that could better exploit the assumptions in this model, and this would be the future of both multi-core and distributed computing. Actor message queues would be in hardware, there would be opportunities to reduce context switching overhead between actor activations, memory models would be more fine grained, capability based security would be ingrained.


I think so too. FP does not work over a distributed system, because it requires viewing "the world" as one chunk. But that does not scale.

Using actors is a great way to chunk the world into smaller parts, each of which can then be handled in a fully FP manner.
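A minimal sketch of that idea in Python (all names invented for illustration): each actor owns its state, and the only way to touch that state is via messages, which are processed by a pure handler function.

```python
from queue import Queue

def counter_handler(state, msg):
    """Pure function: (state, message) -> (new_state, reply). No shared mutable state."""
    if msg == "inc":
        return state + 1, None
    if msg == "get":
        return state, state
    return state, None

class Actor:
    """Owns its state; the outside world can only send messages to the mailbox."""
    def __init__(self, handler, state):
        self.handler, self.state = handler, state
        self.mailbox = Queue()

    def send(self, msg):
        self.mailbox.put(msg)

    def run_once(self):
        # In a real system this loop would run concurrently per actor.
        msg = self.mailbox.get()
        self.state, reply = self.handler(self.state, msg)
        return reply

counter = Actor(counter_handler, 0)
counter.send("inc")
counter.send("inc")
counter.send("get")
counter.run_once(); counter.run_once()
print(counter.run_once())  # 2
```

The handler itself is purely functional; all mutation is confined to the actor's mailbox loop, which is roughly the Erlang receive-loop shape.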


It depends on what the meaning of the word 'scale' is. Analytics definitely uses a ton of FP-flavored computation at gigantic scales. The #1 principle of FP is the insistence on computation with no side effects. Relational algebra (SQL, MapReduce, Linq, etc.) adheres to this principle, making it arguably the most successful application of FP, running some of the largest computations on the planet.


The parent wasn't talking just about scale, but distributed systems at scale. You could argue that making a call to a db is a distributed system but that is usually not what is meant. FP without side effects is excellent for transforming immutable data. History is by definition immutable so it suits that. It doesn't fit quite as well with a distributed system that has changing state, where the current state of the system is the primary concern for users.


Obviously. Yet 'FP doesn't scale for distributed systems' is too broad. MapReduce / Flume / Spark / BigQuery are distributed systems. What he probably meant is that one can't use pure FP for OLTP at scale. This is again a bit too broad, as most systems benefit from having an FP 'business logic' core with some mechanisms to integrate with the rest of the world. Oftentimes distributed systems are built of purely functional layers talking RPC with the lower layers. Even Erlang, the canonical example of 'we really meant actors when talking about objects', is heavily functional in its design.

For reference, I use FP as in 'eschew mutable state', and not 'let's use category theory to solve FizzBuzz using compile-time type metaprogramming'.


Just a side note; it is entirely false that FP cannot work for a distributed system. You could say the same for any program that's not simply a batch computation from input to output.

Luckily things like functional reactive programming exist which allow longer running programs to be written (purely) functionally. The same can be done for distributed programming.

Highly recommend this blog post if you want to learn more: http://conal.net/blog/posts/can-functional-programming-be-li...


> I think Alan Kay himself has suggested that what he really wanted when thinking of OOP was something more akin to the Actor model as espoused by Hewitt and realised by Erlang and to a lesser extent Scala on the JVM.

> This model gives full benefit of FP at the actor level whilst being realistic about distributed messaging semantics.

It's interesting you bring that up; I'm normally very much in the 'use interfaces and dependency injection instead of inheritance' camp. And yet, when I work with Actor Model frameworks (mostly Akka/Akka.NET) I find places where some simple inheritance is insanely powerful yet intuitive. You wind up with something in-between OOP and AOP as a result.


have some examples?


It is also worth noting that features like closures and LINQ-like operations were already available in Smalltalk-80; these tend to be the FP features most used by the average enterprise joe/jane developer.


Have a look at the paper. One of its conclusions is actually that Simula had active objects even in 1961, and also in the OO version of 1967, but for some reason this aspect didn't make it into the concept generally envisioned by OO today.


Here are some thoughts about OOP I had recently, maybe someone here had similar thoughts:

It seems to me that the biggest cause of maintenance problems in object-oriented programming is that it tries to manage global mutable state by partitioning and encapsulating all state into objects. However, the only way to prevent two unrelated objects from manipulating the same part of the state is to create a strict dependency hierarchy, or arborescence[0], between all objects in the system.

This combination of data and code in a strict hierarchy means that all data access patterns are baked into this dependency tree, which makes later, unforeseen changes to the software extremely difficult without taking shortcuts in the dependency tree and thereby destroying the encapsulation, which would destroy the whole point of OOP in the first place.

[0] https://en.wikipedia.org/wiki/Arborescence_(graph_theory)


> This combination of data and code in a strict hierarchy means that all data access patterns are baked into this dependency tree

I believe OOP is really good at modelling state and really bad at modelling data.

With state (and side-effects modeled by state machines) you want a well-defined interface, messages, local retention and so on. This way you get the benefit of reasoning about state in a constrained manner both as the caller and the callee.

For example it makes sense to model Mario as a state-machine that receives messages from game-pad (pressed buttons) and collision events. Note that this is a conceptual, high level kind of modelling and not a concrete, structural one.
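A toy sketch (Python; the states and events are invented, not from any real game) of that high-level modelling, with Mario as a state machine driven by messages:

```python
# Hypothetical transition table: (current_state, event) -> next_state.
TRANSITIONS = {
    ("small", "mushroom"): "super",
    ("super", "enemy_hit"): "small",
    ("small", "enemy_hit"): "dead",
    ("super", "fire_flower"): "fire",
}

class Mario:
    def __init__(self):
        self.state = "small"

    def receive(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = TRANSITIONS.get((self.state, event), self.state)

mario = Mario()
mario.receive("mushroom")
mario.receive("fire_flower")
print(mario.state)  # fire
```

The interface is exactly the constrained one described above: callers can only send events; they cannot reach in and rewrite the state arbitrarily.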

But data is about something. There is data about Mario at any given point in time: his physical dimensions, his velocity, consumed power ups and so on.

This is where OOP makes no sense at all. Reading and transforming data should be universal/uniform rather than guarded by idiosyncratic getters, setters and so on.

This is why SQL, JSON, XML, HTML, Unix filters, AWK, Functional Programming etc. are so powerful. These technologies provide/enable a uniform/universal way of reading data and composing transformations.
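For instance (a Python sketch with invented field names), when Mario's data is plain data, generic tools can query and transform it without any Mario-specific getters:

```python
# Plain data: no idiosyncratic access pattern, just a record.
mario = {"width": 16, "height": 16, "velocity": (2, 0), "power_ups": ["mushroom"]}

# Generic, reusable operations that work on ANY record, not just Mario.
def project(record, keys):
    return {k: record[k] for k in keys}

def update(record, **changes):
    return {**record, **changes}  # returns a new record; the old one is untouched

print(project(mario, ["width", "height"]))  # {'width': 16, 'height': 16}
faster = update(mario, velocity=(4, 0))
```

`project` and `update` are the same uniform operations SQL or a functional language would give you; nothing about them is specific to Mario.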

(As a side-note: I consider state that never leaks as an implementation detail, usually for performance)


> his physical dimensions, his velocity, consumed power ups and so on.

Aren't his physical dimensions just attributes of the Mario instance (or class definition)?

Similarly, his velocity, consumed power ups and so on can all be attributes of the stage he's in. His extra lives can be an attribute of this current game session.


Mario's attributes are a composite of different measurements that are partly needed in different contexts of the simulation.

They are just data. And like any data, we should be able to query it without knowing Mario's special access pattern.


Usually that just shows a lack of understanding of how to use OOP, and of what should be proper classes and what should be interfaces or categories, depending on the language.


That's the point. There shouldn't be any deep understanding necessary at all to represent the simplest and most prevalent case conveniently and uniformly.

Many of the core principles of OO make sense in a stateful context, that doesn’t automatically make them a good fit for data as well.


The same argument can be made for FP or ADTs (abstract data types).

Some people like to sit at the keyboard and code away, but don't expect stellar results without taking the effort to learn best practices, by reading books and such.


Well, quite, but that doesn't obviate the point. What you're describing has an exact analogue: "Saying FP is not as good at handling state as OOP just shows a lack of understanding of FP, of structures such as the state monad."

Yes, using strict patterns to deal with the weak points of the overall paradigm is absolutely fine, but it's patching over cracks rather than using the best tool for the job. OOP is very good at handling state but less good at pipelining data, FP is very good at pipelining data but less good at handling state.


Except that most modern languages are multi-paradigm; again, it reflects more a lack of effort in learning best practices than anything else.


Languages being multi paradigm reflects that people want to use the best tool for the job, not lack of learning. This stems from favoring pragmatism over style.

The conclusions about OOP and FP having their specific trade-offs are not made by people who don't learn, but by people who did and then applied different methods and see their trade-offs.


Except that the FP-vs-OOP trumpeting keeps being cargo cult, even though most modern languages support a mixed approach.


But neither of us is talking about cargo-culting FP or OO; I think you have a bee in your bonnet about something that isn't really related to what we're saying. All we're saying is that there are pragmatic reasons for using an FP approach or an OO approach depending on the situation, and that it isn't due to a lack of understanding.


Right, the lack of understanding comes from bashing OOP from FP angle, without realizing that all FP languages that left academia and are actually used across the industry support the same principles in one way or the other.

Statements like "I hate OOP, hence I do Clojure/Lisp", to which I just hand over a copy of "The Art of the Metaobject Protocol".

Wirth's Algorithms + Data Structures = Programs ideas seem to have fallen by the wayside.

It should be about learning concepts, not about feature X in language Y.


But neither of us were saying that, that's my point. I totally agree with you! But it's not bashing OOP from an FP angle or vice versa, it's that those paradigms (patterns I guess is a better descriptor, but that term has been co-opted a bit by OOP) each have a place. And within a multi-paradigm language both can be used together, and yes, dogmatic avoidance of one or the other is fairly widespread and a bit silly (and particularly silly when both are immediately available).

> "I hate OOP, hence I do Clojure"

Which is a bizarre stance. Rich Hickey has always said that Clojure has great OOP support, it just doesn't force an all-or-nothing approach on you. Do you want inheritance without classes? You can have that. Want interfaces? Dispatch? Encapsulation? Whatever OOP feature you want, Clojure likely has support for it (without dropping to Java, though obviously you can use Java OOP if you really want, too); it just lets you mix and match the individual features and apply them as makes sense to you. And Common Lisp has CLOS.

So, yeah, I agree that the common "OOP bad, FP good" sentiment is missing the point.

With that said, I personally do believe that OOP is a bad default. By which I mean, in any typical non-trivial program I write, the amount of OOP that I feel makes sense is usually not the dominant part (but it's still a significant part); usually I don't want to hide data at all, preferring data structures that can be accessed directly (like in Python or Clojure), and most data processing is functional-like transformations. I also feel that immutable state is the best default, as mutable state should be carefully thought about and managed. Basically, in my opinion, OOP should be de-emphasized somewhat, but it's still an important tool that I use in pretty much every non-trivial project.

For example, I've recently written a successful cryptocurrency trading bot where the trading logic is a pure function of market data, the state of your orders and your strategy's local state, running in a database transaction, and its output is a sequence of actions to be performed (orders to place/cancel/edit, new local state to persist to database, etc). It uses an OOP-style architecture for its top-level services and to abstract the different data sources and exchanges, but the actual decision-making logic is a pure functional sequence of transformations (under the hood; what the user actually sees is an event-driven state machine that generates the desired world state, which gets diffed with the actual world state to produce the required actions). Basically, an FP core wrapped in an OOP set of DI-able services. It's worked out very well.
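The shape described above can be sketched in a few lines of Python; everything here (field names, the one-rule strategy) is an invented simplification, not the bot's real code:

```python
def decide(market, open_orders, strategy_state):
    """Pure function: inputs in, desired world state out. No I/O happens here."""
    desired = set()
    if market["price"] < strategy_state["buy_below"]:
        desired.add(("buy", market["symbol"], strategy_state["size"]))
    return desired

def diff(desired, actual):
    """Turn desired-vs-actual world state into concrete actions to perform."""
    return [("place", o) for o in desired - actual] + \
           [("cancel", o) for o in actual - desired]

market = {"symbol": "BTC", "price": 90}
state = {"buy_below": 100, "size": 1}
actions = diff(decide(market, set(), state), actual=set())
print(actions)  # [('place', ('buy', 'BTC', 1))]
```

Only the thin shell that executes `actions` against an exchange needs side effects; `decide` and `diff` can be unit-tested as plain functions.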


Right, so if they're multi-paradigm, then why would you not use FP features for the parts of a given application that play to FP strengths, and OO features for the parts that play to OO strengths? Why try to squash the former into the latter when you could just use the former directly?

I'm just struggling to see your point here. You're saying it demonstrates a lack of effort in learning best practices whilst also saying that it's a lack of understanding of how to use OO. If it's a multi-paradigm language and you ignore FP approaches that are fully supported in favour of OO patterns (to deal with the same thing!), I'm not sure that's a lack of effort in learning best practices. It's more "I have an OO hammer, I'll hammer away with that".


Exactly, which makes the FP-vs-OOP point moot, since the language supports both approaches.


If the majority of practitioners show this lack of understanding, then that's a problem with the model, not the practitioners.

That's a very interesting observation, which is very much aligned with my own views, but I'll need some time to absorb the details here.

I'm on a quest myself to fully understand what is really wrong with OOP and then try to share it with more people. It's actually very hard because it's a complex topic, ridden with subtleties, and even the exact definition of "what is OOP code" is elusive. I've just recently made a reddit post that you (and others who find your post interesting) might also enjoy: https://www.reddit.com/r/rust/comments/ju5oqp/how_to_avoid_r...

I wish there was some "OOP-deniers" community, where we could discuss things like that with an open mindset. Usually discussing OOP degenerates quickly and it's impossible to even establish some shared understanding.


One of my problems with OOP is that it is often hard to compose. One could argue that most of the OOP design patterns were made as an attempt to solve that. In FP, composition is done elegantly (at the cost of increased complexity in other areas, such as state management).

Scott Wlaschin has a great talk about that: https://www.youtube.com/watch?v=4jusLF_Xz7Q


Probably 90% of the problem is setting yourself up in opposition to OOP at all rather than preaching the gospel of what you consider to be good. Even if you don't have a weak argument it does make your conviction look questionable if the preamble always has to be about how dumb the other thing is.

A lot of the criticism of OOP could also use more steelmanning, rather than picking on claims that often feel like the opposite.


True. Though I guess I don't have a chance to reverse the river made out of a constant stream of developers produced by universities and bootcamps, who think "Cat extends Animal and makes_sound with 'meow', and Dog extends Animal and makes_sound with 'woof'".

I'll try to keep it in mind if I ever make a book or something like that on this subject.


Yeah exactly what I mean, if your target is the understanding of OOP that comes out of intro courses at University then it's easy to knock down. To the point even OOP advocates complain about it.

Steelmanning for example would be taking successful, performant OOP projects and tackling the way they use it. V8, LLVM or UE4 for example off the top of my head.

Steelmanning is realizing that you can write data oriented code in OOP projects.

Steelmanning is realizing that you can write an ECS-style database with OOP. It's further realizing that the ECS pattern as described is not inherently data-oriented and requires extension (e.g. archetypes) to actually meet that criterion.

And to be clear I'm not accusing you of ignoring these things but the discussion around this topic in general.

Basically, as a game programmer with over 16 years of professional experience, I feel a lot of this stuff is dubiously presented. And I don't even like OOP particularly!!


> Steelmanning is realizing that you can write data oriented code in OOP projects.

It's going to be arguing about names and definitions, but I can't accept that it's OOP when it puts qualifiers on all "4 tenets of OOP", etc. Something like:

    - Abstraction(*)
    - Encapsulation(*)
    - Inheritance(*)
    - Polymorphism(*)
(*) where useful, when necessary, but ... generally avoid.

If someone could tell me which book is the BEST OOP book out there, I would be happy to read it and then argue with it :D


Well it’s kinda up to you how you construct your arguments but you’re not going to convince a lot of people without tackling the nuance that exists in the real world in favor of some kind of theoretical idealism. Otherwise you’re perilously close to constructing straw men.

The problem is that with a real-world project like V8, with its 2 million lines of code, it is impossible to tackle and discuss it in detail. What you really need is a model, i.e. a simplified version of reality that you can discuss, applying Ockham's razor. Looking at V8 to understand OO is a bit like taking a balloon ride over a city to understand quantum physics.

Yeah I think people will need to dissect parts if they did that! But the point isn’t you need to critique V8 but that you need to critique how “good” OOP software is put together not beginner level explanations.

BTW, I very much doubt V8 is OO, other than having to support "JS objects" as part of its own data-orientation.

You could look at it and not guess.

I am a little confused about your point - are you saying that OOP is strictly worse, or has some inherent, irredeemable flaw that is not present in other programming styles? Because that's a statement that would need a lot of supporting evidence, considering the success of OOP over the last 30 years or so.

I think a more strongly defensible position is that OOP is not a magic bullet that works for all types of problems, and identifying when and where it is a natural fit is not always obvious.


From my experience, the modern Java-style OOP that I've worked with (some of which I produced myself) through all these years leads to terrible code, and all the data-oriented code I've seen and produced seems much, much better in comparison.

Other programming styles are often orthogonal. One can have mostly functional OOP and mostly functional data-oriented software, or imperative OOP vs imperative data-oriented software and so on.

I'm mostly focused on OOP vs data-orientation. OOP means a fixation on modeling everything as "a graph of encapsulated objects with associated bits of logic calling each other", while data-orientation means "pure logic decoupled from data, with data stored and passed in mostly non-abstract, concrete ways to support the required computation".

The success of OOP is dubious: it's mostly judged by its popularity, which is mostly driven by it being the default methodology taught to people in school. When you look at the actual industry, most of the fundamental, long-lasting software is actually written in some form of data-orientation: kernels, system tools, things like redis, other databases, etc. Though one could argue that this is because fundamental software is usually system software, not business-logic software, and that's true.

I don't have evidence, and it's all just my experience, reasoning and intuition, and I'm intellectually open for debate here, but IMO fundamentally splitting data into small encapsulated chunks that form a graph of native references to each other is just a terrible way to organize any form of computation. I don't want to repeat myself, so please see the reddit post I linked before for more details.


RE: "The success of OOP is dubious: mostly judged by it's popularity, which is mostly driven by it being a default methodology taught to people in school."

It's worked fairly well for "isolated" groups of services, such as APIs to file systems, network services, etc. Where it fails is domain modelling, and any system or sub-system having more than a few entities. OOP doesn't scale when there is a large number of domain nouns and/or attributes involved.

For example, OOP worked quite well for early and fairly simple GUIs. But when GUI systems and applications grew larger and more complicated, OOP GUIs turned into spaghetti. I believe something more like an RDBMS is necessary to manage large quantities of nouns and attributes, so that one can slice and dice their particular view and grouping as needed for different tasks: query the parts instead of navigating a parts graph. You can navigate graphs like a cave explorer, but you can't easily say "show me all nodes (caves) with such and such...". Perhaps this is the "data orientation" you speak of.

(I'm puzzled why in Java Swing the event handling code for a button click has to be fed into a "listener" instead of attached to the button object itself. The second is more logical in terms of thinking in the domain of UI's. Listener is an implementation detail that should be hidden away most of the time.)

On a really small scale, trees and nesting work fine. On a medium scale graphs work fine. But on the larger scale, relational or something similar is a better tool. The hard part has been getting RDBMS to manage "blocks of behavior" well (events, snippets, functions, etc.).


The seductive power of OOP has little to do with the technical merits. As you mention data oriented programming leads to better programs. Even when using OOP technology, programs usually tend to become data oriented (hello protos), and 'objects' become glorified pure functions.

The big OOP idea is the subject-verb-object interface, confusingly named object-method-arguments. The SVO structure matches the cognitive linguistic bias of the human brain, thus feels obviously right to most humans. Even if better programming alternatives exist. The endless litany of new Verber(args0).verb(args1) is here to stay.


Oh. I find the subject-verb-object interface another dimension of debate altogether. Usually it's mentioned by FP devs, since they're very aware of that aspect, given they don't get to use it. :D

It's not only that SVO is more suited for linguistic reasons. Selecting the "object/doer/subject" first is just better UX in all sorts of human interfaces (e.g. games). It allows narrowing down the context naturally, improving discoverability (like auto-completion). A big part of the reason why I switched from Vim to Kakoune.

Subject serves the purpose of being a mini namespace/module and context for operations.

In a way, I blame FP community for being so stubborn in their mathematically-inclined notation, and not experimenting enough with trying to achieve similar human-friendly notation. But maybe I'm just missing all the historical attempts or something.

One of the most interesting spins on the topic is "subject-oriented programming" from Hoon (which is native PL for Urbit OS), where every expression is always evaluated in ever-present implicit subject context, and results in a new subject which will be used for the next expression.


Speculation. The FP community prizes mathematical beauty, for which symmetry is an important factor. It is simply ugly to promote one function argument (which one?) to a privileged role, even if the linguistic side of the brain screams that's the right thing to do.

Re autocomplete, there is no technical reason why auto-complete couldn't trim down the search space based on the arguments of the functions, for example by using reverse polish. But reverse polish is unusual, we really like SVO anyways, and modules as subjects are very useful, if only as an autocomplete anchor, in tooling or in one's brain. OOP/SVO is here to stay for the foreseeable future, and embracing pure functions principle makes it rather pleasant to work with. If only we could decouple 'modules as namespaces' useful idea from all the misguided baggage that comes bundled with OOP, of which the 'active data' antipattern deserves special 'please avoid' mention.


I believe languages should allow the programmer to define the relationship between one code block and another. This includes scope and state relationship. Think more powerful lambdas with better interface syntax and options. OOP languages typically hard-wire such relationships into a limited set of relationship conventions. This results in OOP graphs that don't fit the domain or need well.

Sure, one can do such in Lisp, but many find Lisp hard to read. With C-style languages, the "ugly" syntax gives helpful visual cues. Stuff within (...) is usually a parameter list, stuff within {...} is usually sequential code, and "[...]" indicates indexing of some kind, for example. I find such helps my mind digest code faster.

Lisp aficionados will often claim "one gets used to" the uniformity. But that's like arguing we don't need color because one can "get used to" seeing in black and white. To a degree, yes, but the color adds another dimension of info. Easily recognizable group/block types is another color.


I think adhering to the SOLID principles prevents those problems. Single responsibility should prevent two things working on the same data, dependencies should only point to abstractions, and if you feel that a class hierarchy is hard to change, favor composition over inheritance.


I honestly don't see how SOLID avoids the problem I mentioned, but let me clarify this:

When I say dependency, I mean it in a general sense, i.e. if a stateful object A uses a stateful object B, e.g. by composition, then A depends on B. So the behaviour of A does not only depend on the state of A, but also on the state of B. It does not matter if A or B are defined as abstractions, because in the end all object instances are concrete. These dependencies will always form a directed graph in some way.

You are correct that the principle of single responsibility prevents two objects working on the same data. However, as I wrote earlier, this means that the dependency graph must be a perfect hierarchy, also known as an arborescence or out-tree. A perfect hierarchy does not allow shortcuts, so if A depends on B and B depends on C and A wants to know something about C, we always have to go through B, regardless of whether we are actually interested in B or not. This is what I mean when I say that data access is baked into the dependency graph, making it difficult to adapt OO programs to new requirements.
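A small Python sketch of that baked-in path (class names invented): A can only reach C through B, so B grows pass-through methods it doesn't actually care about:

```python
class C:
    def status(self):
        return "ok"

class B:
    def __init__(self):
        self._c = C()  # C is encapsulated inside B

    # B must expose a pass-through just so callers can reach C.
    def c_status(self):
        return self._c.status()

class A:
    def __init__(self):
        self._b = B()

    def check(self):
        # A isn't interested in B here, but the hierarchy forces the detour.
        return self._b.c_status()

print(A().check())  # ok
```

Every new thing A needs to know about C means another forwarding method on B; the access pattern is frozen into the A -> B -> C tree.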


But the alternative is that more data needs to be exposed and then if you change that data all the functions that operate on it need to be changed. This is a well known problem. With OO you can change some of the data/state (and the resulting behaviour of the system) with just a local change. But if you want to change the functions then you are out of luck.


[flagged]


This sort of attack will get you banned on HN. Could you please read https://news.ycombinator.com/newsguidelines.html and stick to the rules? Note that they include this one: "Be kind."


By now, most of the classes I write are immutable and I use inheritance rarely (although I do use interfaces).

The one feature that I still find very useful from OOP is the ability to have a "shared context" for a related set of methods/functions. This works especially well with dependency injection: I can have some properties passed to the class constructor (say: the base directory where some files lie, a logger implementation, etc.) and then write my file manipulation functions without having to pass those properties around from function to function.

I wonder if there is something similar in some FP languages.


The reasoning behind "closures" is that one can capture some context, such as the example you describe. If we think of an object as simply a record of functions, then we can close over some shared context during the construction of such records, if we have a language with records and closures.
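A sketch of that in Python (names invented): construct a record of functions that all close over the same injected context, much like constructor-injected properties:

```python
def make_file_ops(base_dir, logger):
    # base_dir and logger are captured once; every function below shares them,
    # so they never need to be passed from function to function.
    def path_of(name):
        return f"{base_dir}/{name}"

    def log_path(name):
        logger(f"accessing {path_of(name)}")
        return path_of(name)

    # The "object": a plain record of closures over the shared context.
    return {"path_of": path_of, "log_path": log_path}

messages = []
ops = make_file_ops("/tmp/data", messages.append)
print(ops["path_of"]("a.txt"))  # /tmp/data/a.txt
ops["log_path"]("a.txt")
print(messages)  # ['accessing /tmp/data/a.txt']
```

Calling `make_file_ops` plays the role of the class constructor; the returned record plays the role of the instance.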


Look up the Reader monad. Don't be intimidated by the M word; it's a simple pattern which makes use of function arguments but masks the argument drilling.


Yes, it's called a closure.


Others have mentioned closures; there are also a few other patterns that can do similar things.

One is "ad-hoc polymorphism", which lets us associate values with types. This is essentially the same as implementing an interface in languages like Java, except we don't need to modify any existing definitions (e.g. we don't need to edit a class to say "extends Foo"). This is useful for picking between pre-defined sets of values, e.g. for your example of logger and base dir we can define an interface like 'Environment' with methods returning a base dir and a logger; then we can define a 'Production' type which implements this interface by providing the real logger and base dir, a 'Test' type which returns a temp dir and stdout logger, a 'Dev' type whose logger keeps verbose debug messages, etc. We can write our code in terms of a generic Environment, then in our 'main' function we specify that we're using 'Production'; in our test suite we specify that it's 'Test'; when debugging we can specify 'Dev'.

Another approach is called 'Reader', and it's based around the following type and helper functions (in Haskell syntax):

    data Reader a b = R (a -> b)

    read :: Reader a a
    read = R (\x -> x)

    runReader :: Reader a b -> a -> b
    runReader (R f) x = f x
Or in Scala:

    final class Reader[A, B](f: A => B) {
      def run(x: A): B = f(x)
    }

    object Reader {
      def read[A]: Reader[A, A] = new Reader(x => x)
    }
Note that this type literally just contains functions from some type A to another type B. The function 'read' is a wrapped-up identity function (returning its argument unchanged), the 'run' function just applies a wrapped-up function to an argument.

This gets interesting if we do two things. First we can specialise the input type (a or A), e.g. in Haskell:

    type Application t = Reader (Path, Logger) t
Or in Scala:

    type Application[T] = Reader[(Path, Logger), T]
The type 'Application[T]' represents functions from a Path+Logger pair to a T.

Next we can define some functions for manipulating Reader values:

    -- Applies a function f to a Reader's result; AKA function composition 
    map :: (a -> b) -> Reader t a -> Reader t b
    map f (R r) = R (\x -> f (r x))

    -- Wrap up a value into a Reader, by accepting an argument and ignoring it
    wrap :: a -> Reader t a
    wrap x = R (\y -> x)

    -- Combine two Readers, so they're given the same argument and their return values are paired
    product :: Reader t a -> Reader t b -> Reader t (a, b)
    product (R f) (R g) = R (\x -> (f x, g x))

    -- "Unwrap" nested Readers, by passing the argument into the result
    join :: Reader t (Reader t a) -> Reader t a
    join (R f) = R (\x -> runReader (f x) x)
Together these functions let us write arbitrary functions "inside" this 'Application' type. For example:

    -- The 'fst' function gets the first element of a pair
    baseDir :: Application Path
    baseDir = map fst read

    -- The 'snd' function gets the second element of a pair
    logger :: Application Logger
    logger = map snd read

    -- Use the logger to log the given string (via some 'logWith' function)
    log :: String -> Application ()
    log msg = map (\l -> logWith l msg) logger

    -- Read the contents of a given file from the base dir
    openFile :: Filename -> Application String
    openFile f = map (\d -> readContents (d ++ "/" ++ f)) baseDir

    -- Log the contents of a given file from the base dir
    logContents :: Filename -> Application ()
    logContents f = join (map log (openFile f))
Note how we can only access the base dir and logger from 'inside' the Application type; i.e. we can get an 'Application Logger', but we can never just get a 'Logger'. That's because the code doesn't actually define a logger or base dir anywhere; any value 'Application T' is really just a function '(Logger, Path) -> T'. To extract a result, we need to give it to 'runReader', which (in the case of Application) also needs to be given a '(Logger, Path)' pair.
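To make that last step concrete, here's a self-contained toy that runs one of these Application values (Path and Logger are shrunk to String stand-ins, 'map' is renamed 'rmap' to dodge the Prelude clash, and 'banner' is an invented example):

```haskell
type Path   = String
type Logger = String

data Reader a b = R (a -> b)

runReader :: Reader a b -> a -> b
runReader (R f) x = f x

-- Same as the thread's 'map', renamed to avoid shadowing Prelude.map
rmap :: (a -> b) -> Reader t a -> Reader t b
rmap f (R r) = R (f . r)

type Application t = Reader (Path, Logger) t

baseDir :: Application Path
baseDir = R fst

banner :: Application String
banner = rmap (\d -> "serving from " ++ d) baseDir

main :: IO ()
main = putStrLn (runReader banner ("/srv/app", "stdout"))
```

Only main ever sees the concrete (Path, Logger) pair; everything above it stays a function still waiting for its environment.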

Using these functions directly can be a bit tedious. Some FP languages have special syntax which makes it nicer, e.g. in Haskell we can write things like:

    do x <- foo
       y <- bar x
       baz x y
The Scala equivalent is:

    for {
      x <- foo
      y <- bar(x)
      z <- baz(x, y)
    } yield z
These get rewritten to (the equivalent of):

    join (map (\x -> join (map (baz x) (bar x))) foo)
This API is a combination of Functor (map), Applicative (wrap and product) and Monad (join). Hence people sometimes call this "the Reader monad"; but don't let that scare you!


Thanks, I do know some Haskell, but I never dug deep enough into it to fully understand some of the concepts such as Reader. It seems to at least approximate the concept quite well.

Still - I don't know whether it's just due to unfamiliarity, but it seems to me that the simplicity and readability of the "OOP" equivalent (not requiring you to understand the semantics of "map" and "join" in the context of Reader), are a bit better. Also - OOP languages allow for dynamic dispatch and if I'm not totally mistaken (but I might), Haskell at least doesn't allow that unless you start adding language extensions, so you can't even make something like `Logger` a type class. I guess that would work in Scala, though, but then again, Scala also supports my original "OOP" approach anyway.

The ad-hoc polymorphism idea doesn't really solve the issue, I think, because even if you bundle up all the values into an "Environment" variable, you still have to pass it around from function to function.

Others have mentioned closures, too. I'm not sure if they mean something like:

   data UserRepository = UserRepository { load :: UserID -> IO User, save :: User -> IO () }

   mkUserRepository :: Directory -> Logger -> UserRepository
   mkUserRepository baseDir logger = UserRepository { load = load, save = save }
       where load = ...
             save = ...
I mean, I guess that works somehow, but now we've made it possible to create user repositories with weird semantics (I know you hide the default data type constructor from other modules etc., but still). It also makes the implementation weirdly separated from the data type. And it makes me wonder... where would I even add the documentation for what the functions do. Having top-level functions certainly seems preferable to me.
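For what it's worth, a runnable toy version of that record-of-functions shape might look like this (the load/save bodies are invented stubs, since the real ones are elided above):

```haskell
type UserID = Int
newtype User = User String deriving Show

data UserRepository = UserRepository
  { load :: UserID -> IO User
  , save :: User   -> IO ()
  }

-- load'/save' are closures: they capture baseDir and logger from the
-- enclosing scope, so callers of the record never see those dependencies.
mkUserRepository :: FilePath -> (String -> IO ()) -> UserRepository
mkUserRepository baseDir logger = UserRepository { load = load', save = save' }
  where
    load' uid = do
      logger ("loading user " ++ show uid ++ " from " ++ baseDir)
      pure (User ("user-" ++ show uid))     -- stub; real code would read a file
    save' (User name) =
      logger ("saving " ++ name ++ " under " ++ baseDir)

main :: IO ()
main = do
  let repo = mkUserRepository "/tmp/users" putStrLn
  user <- load repo 1
  save repo user
```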

My point is, I think nothing would theoretically stop a PFP language from having some sort of "module scope" where we pass some dependencies to the module that can then be re-used internally. The module could even be represented internally as a data type + a constructor function, but it would allow you to use top-level functions that can use those arguments. It shouldn't break any of the correctness / purity guarantees.

E.g. (just a quick sketch):

  -- UserRepository is simultaneously a module, a constructor function and a data type
  module (logger :: Logger, baseDir :: File) => UserRepository where

  -- ad-hoc syntax to show that UserRepository is
  -- an argument to the function at call-site, but
  -- it's looked up "implicitly" from the context
  -- within the definition
  load :: (UserRepository) => ID -> IO User
  load id = loadFrom baseDir id
      where loadFrom = ...
  
  save :: (UserRepository) => User -> IO ()
  save = ...
and then

  import UserRepository (load, save)
  
  main = do
    let baseDir = File "/some/Directory"
    let logger = Logger "/some/Directory/logfile.log"
    let repo = UserRepository logger baseDir
    user <- load repo 1
    ...
Maybe PFP-ers wouldn't find that useful, but I think I would find it very readable while essentially only amounting to "syntactic sugar" without having to break language semantics. I don't want to claim that Haskell should do this, though. :)

I also didn't want to claim that languages like Haskell don't have any way of doing this, just that, from a readability / simplicity point of view I find this "OOP" feature to be very useful and I haven't yet seen anything in the Haskell world that seems comparable _to me_ ergonomics-wise.


I think what you are asking for are first-class parameterised modules, which is certainly a feature that OOP has that many FP languages don't. It does exist in some FP languages though, for example OCaml and the 1ML paper and language.

Most FP languages are already incredibly good for the amount of funding and development they have received. They could be so much better if industry / US big-tech actually started investing in them or the ideas behind them.


Nice. I should take a look at OCaml maybe.

There's an article from 2000 called Why OO Sucks, by Joe Armstrong:

http://www.cs.otago.ac.nz/staffpriv/ok/Joe-Hates-OO.htm

It was originally discussed here at:

https://news.ycombinator.com/item?id=19715191


Nonetheless, Erlang is closer to the original OO as implemented by Smalltalk than C++ or Java.


After using Smalltalk for a while, even JavaScript felt closer to the original OO than anything in the spirit of C++ or Java.


Same story repeating over and over again.

Initial hype. Then abuse and overuse. Disillusionment. Decline. At some point people will finally figure out what OOP is all about by looking up decades-old works and YouTube videos (if YouTube is still available) and hopefully will learn to integrate it.


A discussion with a dev recently when I started refactoring his PR with him and started pulling business logic from services into model.

Dev: "NOOO!!! You can't do this!!!"

Me: "But why?"

"You can't put code in model. Model is for fields and getters and setters only and business logic belongs in business services"

Me: "Why?"

Dev: blank stare... (imagined, this was over zoom)

Me: "So... why do you bother with having fields private if you plan to have public getters and setters for everything?"

Dev: blank stare

Me: "Isn't object oriented programming about having logic alongside the data rather than separate? Isn't it about modeling operations that act on the data to limit exposure to internal state?"

Dev: blank stare

Me: sigh...


Just to set some context, I'm of the "OOP Generation" where I learned to program when OOP was at its zenith. Every major language was Class-based OOP (C++, C#, Java, Python, Ruby) and it was just THE way to code for all things in the foreseeable future.

That said, I've since looked into and learned other methods of programming: classic C, prototype-based JS, functional Clojure, and Go.

From what I can gather, even from the SOLID principles and other class-based OOP advice (prefer composition over inheritance), the real winners of OOP are Interfaces (as abstraction and reuse), Aggregation (objects can contain other objects), and Delegation (which allows for ergonomic aggregation).

In this way, Go gets it mostly right IMO, though at the risk of "initial hype" I'm still cautious on it.

It feels so backward that OOP is taught with a focus on abstract & subclasses, with very little emphasis on interfaces, which is really how we achieve the goals of abstraction and reuse.

What's funny is seeing languages like Java trying to "undo" the arguable "mistake" of abstract classes by adding more and more power into interfaces: default methods, etc.

To your original conversation, there's a mix going on with "model." Are they talking about the Service's API Model, the internal model representation, or the database model? The outer layers (API & Database) of your code should be plain structs (POJOs in Java), but you're absolutely right that within the business code itself, you should have logic within the classes. Otherwise, you're not coding much different than C passing around struct pointers.

Related terms: MVC, Data access object


I'm a Pascal programmer, so I come from a different heritage.... anytime I go to look at OOP in C++ or Java, etc., I see a fog of things about factories, and namespaces, and far, far too much code just to set up objects, which are glorified data structures with the code that manages them.

I strongly agree with your conclusion that Composition, Interfaces, Aggregation, and Delegation are the real value in OOP.

They all play very nicely together. Composition avoids the need to restructure everything into a single hierarchy.

Interfaces make it possible to separate concerns, which lets you treat each library as a black box.

Example: Imagine the electrical breaker panel in your home. The exact principles that cause the breaker to trip are irrelevant, nobody cares if it's thermal, magnetic, hydraulic, or whatever new tech comes down the block... all that matters is that it does the job in the described manner. It meets the goals of its interface.

In hiding implementation behind the interface, you fend off premature optimization, and stave off technical debt by keeping things small enough to refactor at will. It is this standardization that allows you to add a Ground Fault Interrupter to a box that was designed before they became available.

Aggregation is how you can have a series of fields in a form, all of different types, yet it all is a form. If one of those fields has a list of strings... nothing breaks, and you don't have to re-arrange everything to make it work.

Example: Your breaker panel has any mix of 120 and 240 and GFI breakers, some even do 3phase power.

Delegation is how the fields in a form let the details of where on the screen, and how to resize, just work.

Example: Your breaker panel has X number of slots, provides a UL approved housing for them, with a reliable User interface.

OOP isn't the problem, the teaching is.


> What's funny is seeing languages like Java trying to "undo" the arguable "mistake" of abstract classes by adding more and more power into interfaces: default methods, etc.

Default methods are how Java-like languages are able to have mixins. They are extremely handy and quite safe to use as far as I can tell.

golang on the other hand shows its extreme weakness in modeling non-trivial domains, and the code base quickly becomes very verbose and tedious to work in.


You need to read up on Anemic Domain Model (https://martinfowler.com/bliki/AnemicDomainModel.html).

I see a lot of chaos in your post.

Model is a model. Model is there to represent an aspect or aspects of a real system (https://en.wikipedia.org/wiki/Conceptual_model). An object called "Employee" is a "model", a representation of certain aspect of an actual employee.

Object oriented programming is about using objects -- models (representations of aspects) of actual systems, actors, objects, etc. In OOP you build the model from operations that define the object. The fields are not part of the interface, the operations are. That's why everybody says not to make fields public (but for some reason not everybody remembers why).

So a method call fire() on an object Employee is more "object oriented" than having HREmployeeService.fireEmployee(employeeId) to set field employedTo directly on EmployeeDAO.


I think OOP is great, but one weakness is that a lot of abstract concepts don't really map to objects. And then people have flamewars about where to put those concepts.

In the real world most employees would never fire themselves and HR would update things like payroll which the employee isn't allowed to change. Your example makes more sense as a justification for keeping a service layer.


Abstract concepts map very well to objects. The problem of mapping a business domain to computing structures isn't unique to OOP.

In this case, I'm slightly gobsmacked that no-one pointed out that an employee is not their job. The contract of employment is a separate domain concept. So is the invocation of the clauses of that contract. Termination involves invoking a particular contract clause.

The job is to model a) the contract, b) the invocation of a specific clause, c) the record of that invocation (including its authorisations and so forth), and d) the consequential processing in other systems.

Whether you're working in FP, or Kay-style OOP, or Java-style OOP or, heck, SQL stored procedures, these are separate concerns, separate concepts, separate units of code, separate records, hopefully loosely coupled by whatever idiomatic form is at hand.


> Abstract concepts map very well to objects. The problem of mapping a business domain to computing structures isn't unique to OOP.

Yes, I agree in theory. What I meant to say is that many concepts don't map to physical objects.

> In this case, I'm slightly gobsmacked that no-one pointed out that an employee is not their job. The contract of employment is a separate domain concept.

This is what I mean by flamewars. Your solution sounds good, but there are other people on this same thread still arguing that it makes perfect sense for an employee to fire itself.


The litany of category and analytic blunders on display in that subthread is remarkable, but it’s hardly intrinsic to OOP, and mostly attributable to inexperience.

As with almost everything, when you try to stretch an analogy or use a single tool for every job, you are going to run into problems. Every tool has limitations and so does OOP.

Employee object is not a representation of the will of the employee. It is an interface with operations that are best naturally acting on an Employee object -- changing its state.


I suppose a manager is also an employee, but Manager inherits from Employee with a .fire() method seems wrong, no? I'd expect a method which takes another object or ID, not to change the state of the object with the fire() method.

Anyway, this is why I stay far, far away from OOP when I can - vast amounts of time spent on these questions which generate zero insight (at least the mathematical obsession with some parts of FP can be fun)


I agree with nimblegorilla: an employee that fires himself does not make sense. HR fires them, or some other entity, whichever has the power and permission, does so.

In the end, no matter what example you will pick it will always have the same kind of problem, that a higher entity is needed. And in the cases where that is not true, no encapsulation is needed. For example, "employee.age()". You can just do "olderEmployee = copy(employee, age = employee.age + 1)" (pseudocode) from the outside by having the age exposed (which is not classical OOP at all).

But I'm curious if you can find a counter example. Usually it stops working when global constraints come in.


After doing some Smalltalk, I think the Employee doesn't fire himself; you send him the fire message, as in Employee fire: 'immediately'. So I kind of see where he's coming from here...


Yeah but that suffers from the problems already mentioned - an employee doesn't fire themselves.


an employee isn't firing themselves, they are being sent the message they are being immediately fired


Sure - then what happens? They notice that they have been fired, but they don't remove themselves from the payroll, or anything like that. So what does the message do?


What do you do when you're fired? You collect all your belongings, wipe your machine, and leave.

Sure, someone could do this for you, but how would they know that you've left a tracksuit in a locker in the hallway or some cookies in the fridge? Unless the company manages a lot of bureaucracy that keeps track of every single object or resource belonging to the employee.


> What do you do when you're fired? You collect all your belongings, wipe your machine, and leave.

I'm sure you meant to say, the you tell all your belongings to collect themselves together, the machine that it better wipe itself and then you tell yourself to go through the door...


Haha, smart one. The point is, you can perform actions on what you own, and in turn you can ask what you own (for example a laptop) to perform an action on itself or its belongings.

You don't need to know how to handle a filesystem and a specific brand of hard drive to ask your laptop to wipe itself; the laptop in turn doesn't need to know all the internals of the hard drive to ask it to perform that action.


I agree with that. But my answer was for https://news.ycombinator.com/item?id=25118437

> So a method call fire() on an object Employee is more "object oriented" than having HREmployeeService.fireEmployee(employeeId)

Which doesn't make sense - ignoring the "to set field employedTo directly on EmployeeDAO" because that is unrelated.


> What do you do when you're fired

Oh, that's another point. You wouldn't send the message "fire [yourself]" but rather "[you are] fired". Do you agree?


I do. But the message you replied to was specifying that already:

"they are being sent the message they are being immediately fired"

To which you were asking "So what does the message do?"


Fair enough. I probably interpreted this response wrongly.

In the real world (in our program code I mean), such a message would probably rarely being sent, because there is no action to be done on our model of an employee - I guess that's why I got the answer wrong. My bad.


the message sender can then send the remove payroll message to the PayrollDepartment

    Employee fired: 'immediately'.
    PayrollDepartment remove: 'billy'.


Getters were popular with automatic tooling. They were intended only for UI models. In Java this was called the Bean Standard. https://en.m.wikibooks.org/wiki/Java_Programming/JavaBeans


Don't get me started on all the helpful tools that make it easy to automate antipatterns.

Heard fields should be kept private? Don't know why? Lombok to the rescue, let's generate all those suckers up when we could just as well have made the fields public.


The rationale is the same as that for an abstract data type. You don't need to know how a class structures its properties and directly coupling to a representation can burden future maintenance. This very rarely comes up, but when it does it is nice to change a getter instead of tracking down all the locations where it was directly coupled to the representation. It isn't at all obvious when you are writing a class whether future maintenance will need to change the representation or not.

This isn't as big a deal in Python, for example, where you can @property your getter and setter. In that way you can easily migrate existing code from direct data structure coupling to indirect/computed access.


.. except when manipulating proxy-ed objects, like Spring beans or ORM detached entities. The object you're calling the setter on is not the object (struct) holding the field..

I know, yet another idea thrown into the concepts cocktail.. (not saying these are correct justifications for the getter/setter pollution, just reasons why you find them everywhere in the wild)


It is better not to put business logic into data model - in order to:

1) Keep your business logic simpler.

2) Keep your data model simpler.

If you put both business logic and data model in the same class, then:

- When you investigate data references pointing to your class -- you will see references noise from your business logic.

- When you investigate business logic references pointing to your class -- you will see references noise from your data model.

3) Combined "business logic + data model" class is much harder to refactor.

So, technically, you can combine business logic and data model in the same class.

But practically, such code combining will significantly complicate maintainability of your code.


This has no relation to the paper. I'd recommend taking a look at it first.


I keep going back and forth on OOP. I was really into it about a year ago, but have now adopted a mostly functional style, except for large projects.


No one, not even the supposed originators of the terms or the ideas, knows or can agree on what OOP even is or means.

So when codebases written in the ubiquitous "OOP" style (essentially procedural code full of state, mutation, temporal dependencies, side effects, exceptions, and other things which make it impossible to reason about what's going on) are full of defects, the advocates can always claim it's because it's not Real, True Object Oriented Programming.


Super proud to see work like this from Portland State! Without entering into the debate of Object Oriented Programming vs. Not OOP, I think legitimate research and examination is important for progress on both sides. Excited to do more than just skim this after work.


The paper is from 2012, and it is not actually about research results, but a historical outline, even with a few errors. But still interesting to read.



