Why Are So Many Developers Hating on Object-Oriented Programming? (thenewstack.io)
132 points by ingve on Aug 23, 2019 | 293 comments



Have you seen bad OOP code? It's an unreadable mess of ObjectManagerFactory classes where everyone mutates everything.

Have you seen bad functional code? It's an unreadable mess of monads, flatmaps and partial composition where you're not really sure how many arguments a curried function is withholding.

Have you seen bad procedural code? It's an unreadable mess of nested if-else statements where you're not really sure what state the world is in.

There are no silver bullets, only different solutions to different problems.


I feel that it is in fact possible to learn from history, and what humans tend to be good at and bad at, and derive useful directions for the future. "Use the right tool for the job" and "no silver bullets" are certainly tautologically correct, but they seem to miss the point of these discussions: we have a choice in what paradigms we teach, what frameworks we write, and what tools we use. How do we choose the right tool? These platitudes often have the (unintended?) consequence of effectively serving as veiled gate-keeping: "silly novice, you thought learning X was sufficient? No no, you must learn all the possible paradigms, and at some point you will become aware of which one to precisely apply for any given problem. Me merely telling you the solution would be cheating!"

Practically speaking, in my opinion, beginning teaching programming with an OOP approach can be detrimental to students. When I see students work in procedural (not even functional!) environments, they seem to piece together "the puzzle" of the programming task, whereas in OOP environments they often seem to devolve into strange taxonomical metaphysical questions: "Is a Car a Vehicle? Or does it implement Drivable? Is a Window a Widget itself, or does it only own Widgets? If it's a Widget itself, then could a Window contain sub-windows?" So at least for the very constrained question of introductory languages and techniques, I think there is nothing wrong with specifically calling out certain approaches, vs. vaguely attesting to the truism that you can write bad code in any language. This feels similar to when people discover the concept of Turing completeness and conclude that all languages must therefore be equivalent, and that any programming-language discussion is thus moot. No. If that were the case, we should have never bothered going past assembly.


> How do we choose the right tool?

A good rule of thumb is that your code should be as close as possible to a plain-English description of what you want the code to do.

So if you're trying to write a program like "When you click the button, it turns grey, and then sends the message, and then turns blue when the message is done being sent", imperative programming is probably your best bet. If you're writing "find all the posts that contain the word 'donut'", something like SQL or functional programming is probably a good fit. You can usually tell when you made the wrong choice when you end up needing to think really hard about how to fit the behavior you want into your paradigm, or when you have to write a big long comment saying, "This isn't very clear from the code, but here's what it's trying to do..."
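To make that concrete, here's a rough Python sketch of both shapes (Post, on_click, and send are made-up names for illustration):

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str

    posts = [Post("donut day!"), Post("kale recipes"), Post("free donuts")]

    # Declarative: reads like "find all the posts that contain the word 'donut'".
    donut_posts = [p for p in posts if "donut" in p.text]

    # Imperative: reads like the step-by-step button description.
    def on_click(button, message, send):
        button.color = "grey"  # "it turns grey"
        send(message)          # "and then sends the message"
        button.color = "blue"  # "and then turns blue when the message is done"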

And as GP says, there isn't any one paradigm that's good for everything, so well-written programs will often use a mix of them. If someone tries to tell you that one size fits all (all programs should be imperative C! all programs should be object-oriented Enterprise Java Beans! all programs should be functional TypeScript!), that's usually because they aren't a very good programmer.


Except there's no book or course or whatever that can teach somebody these things. Instead, everybody has to figure it out on their own, and usually they just don't. Engineering disciplines don't work this way. I don't understand why it's okay for software development.


University curricula often try to expose students to a wide variety of programming paradigms. I'm not sure how you could walk out of Berkeley's CS 61 series thinking that there's One True Way to write all programs, for example.


There are books and courses that try, e.g. https://www.info.ucl.ac.be/~pvr/book.html


This looks great, thanks. Definitely gonna give it a read.


Very well put, that's why I cringe every time a fellow soft dev considers himself an engineer


> How do we choose the right tool?

We choose the right tool the same way a painter chooses among oil paint, watercolor, etc. Step back and look at the subject, and think how well each type of paint would represent it. Also consider one's own skill with each type of paint, one's interest in exploring new paints, and the cost of obtaining and storing another kind of paint.


Right -- this is exactly the kind of language that turns people off. It is a vague analogy with no practical advice that compares the act of programming to the art of a skilled painter, giving a field that strives to have the restraint and responsibility of engineering a sort of weird "you have to hear the music" feel.

I want to make clear that I do not necessarily disagree with your assessment, but understand that there are people who are deciding whether to pick up an FP book or OOP book. They have to start somewhere, and this "holistic" advice isn't tremendous help. Similarly, as an industry, it's hard to translate what you just wrote into the next UI framework. I think we can agree that we feel comfortable enough with loops to not take a completely neutral stance on the "loops vs. longjmp" debate? If someone asks you whether they should use gotos everywhere, it isn't strictly necessary to say "depends on the problem!" given that they probably aren't writing assembly-level filters for Adobe Photoshop where using a goto might actually make a difference -- since they probably wouldn't be asking in that case?

This isn't even an "all things being equal" situation. We are actively encouraging OOP to entry-level programmers today -- it is fair to ask whether that is the right move, because doing nothing is actually just choosing that OOP should be the default. It is totally fine to say "maybe OOP has not delivered on its promises, and while there may be useful lessons here, I think it's safe to say that it's probably not the tool you should choose FIRST for any given problem."


I suppose it's an aside to your main point, but after 15 years of doing this stuff (and learning to code by writing procedural PHP first, back in the 4.0 days), I've spent a portion of it investing in OOP and now I'm starting to take a step back from it, and I'm having much more fun writing code without having to worry about classifying the code I'm writing.

The things that OOP does well can be achieved quite nicely using other programming paradigms: you don't need a class system or a hierarchy to encapsulate your state and behaviours. And a good OOP language gives you the resources to model your behaviours in functional and procedural ways so long as you have first-class functions and good support for expressions in favour of statements.
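For example, a closure gives you encapsulated state and behaviour with no class in sight - a minimal Python sketch:

    # A counter with genuinely private state: `count` is reachable
    # only through the two functions returned below.
    def make_counter():
        count = 0
        def increment():
            nonlocal count
            count += 1
        def value():
            return count
        return increment, value

    increment, value = make_counter()
    increment()
    increment()
    print(value())  # 2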

OOP fits much better into an organisational model where you can think about domains, roles, and responsibilities, and all of a sudden your Factories, Validators, Services, Commands, Builders, Entities and Repositories map to the same way of thinking your hierarchy of teams and managers does, with all of the overthinking and bike shedding that comes with it.

You could build the same feature without all that but I think you need an experienced and disciplined team to pull it off well. And I think that supports your point: if the entry point to programming wasn't OOP (like with Java or PHP or even Ruby) but something more fundamental, then maybe that gives you the opportunity to decide when you need the power of abstraction that OOP gives you.


What do those entry-level programmers want to make? They should start with something suitable for that, even if it's harder to learn. A motivated novice will do better at learning hard things than an unmotivated one will at learning easy things.

But I'm getting out of my element here; I don't interact with entry-level programmers that much. For all I know it may be good for that group that we have fads. If the crowd decides that now we're always using composition and never inheritance, they'll get to fully explore composition. Emphasizing a single skill for an extended period is probably good training.

But to get to a more concrete example, say you're not a novice and you're making a static site builder. If you go with the fad of avoiding inheritance you'll blind yourself to the fact that a typical static site has a collection of page types with is-a relationships, totally ripe for inheritance. You'll probably go with the "layout attribute" model of grouping pages and miss the opportunity to make a static site builder that's better than the others.


> We are actively encouraging OOP to entry-level programmers today -- it is fair to ask whether that is the right move, because doing nothing is actually just choosing that OOP should be the default. It is totally fine to say "maybe OOP has not delivered on its promises, and while there may be useful lessons here, I think it's safe to say that it's probably not the tool you should choose FIRST for any given problem."

the following is my anecdotal opinion but...

that is mostly an economic effect caused by the java “pop culture” and education being seen as a way to “get a job”, which is then self-reinforcing (more educated in java, bigger talent pool, increase in default job requirements, more educators by default choose java)

economics is a huge factor in this “i hate oop” bandwagon because java isn’t even “good oop” (again my admitted bias) and people dislike being told what tools to use, either by educators or employers, when they can see much better ways to do things than DataManagerFactoryFactoryInjectionContainer<T>...


This. Java took abstractions to absurd extremes. Contrast that with something like Python, which does things much better and uses a nice data model.


In Python, especially after 3, everything is an object, easily observable by applying dir() to them.

The only things that fall short of Smalltalk are the conditional and looping operations.

Python also offers multiple inheritance and proper meta-classes.


What courses or universities actually start with OOP? In my uni we started with functional programming (Scheme) and then went to C, only then we learned Java, C++, Prolog and so on.

In another uni when I supervised some exercise classes all students started with Python but using only imperative paradigm.


In high school, AP Computer Science uses Java (20 years ago when I was in high school, they were just transitioning off of C++). You can see what the earliest programmers are being taught here: https://apcentral.collegeboard.org/pdf/ap-computer-science-a...

Edit: When I was in HS, the AP CS courses (now there is only one since the second one was discontinued) used the "Marine Biology Case Study" ( https://en.wikipedia.org/wiki/AP_Computer_Science_A#Marine_B... ). It has apparently been replaced with the similar "GridWorld" case study. I think it highlights one of the issues of OOP: a fascination with modeling, or perhaps over-modeling, the real world. Throughout the course you are expected to develop deep knowledge of a myriad of Fish and Fish Tank-related classes, which, while perhaps an accurate preview of what working in an environment like Java is like, is not something that in my opinion succeeds in either exciting most students or teaching them the "fundamentals" of algorithms and such. I think it can very much feel like a very complex and convoluted exercise in meeting the abstract goals of "categorizing" the world. Ironically, the insistence on modeling the world makes it feel frivolous and disconnected from any practical use -- which is incredibly sad since IMO CS could be the most practical math/science course you take. In my experience when you show kids the cool things that they can get computers to do for them, it can be more inspiring -- such problems do not lend themselves well to formalized class hierarchy drawing though.


They are back to two, and the new one appears much further down the “not only OOP” path. It’s more about the grand ideas of CS and less about programming. https://en.m.wikipedia.org/wiki/AP_Computer_Science_Principl...


Went through AP comp-sci 20 years ago too. We learned with C++ via DJGPP. Can't recall if the AP test was in C++ or pseudocode. Certainly covered data structures, recursion, graph traversal, etc.

High School. Something to consider for those complaining about basic data structure questions in interviews :)


Don’t most university courses worldwide start with Java - or use Java as a mainstay? There are practical benefits: everyone knows it, it’s easy to learn (the language itself, while the standard library is straightforward too), and there’s a huge ecosystem.

...and it’s OOP.

Back when I was finishing HS and touring ~5-6 different universities - every single one said they used Java - one said they were about to switch to teaching most of their first two years using JavaScript (this was a few years before NodeJS was even released, so the notion of using JavaScript as a serious didactic language wasn’t taken seriously by me).


It rings true to me, at least in CS and especially software engineering. In mathematics, physics and "real engineering" departments I haven't seen it at all, though, and my experience may date me.

It was my first semester of university programming in '05. Anecdotally, I wasn't as bothered by "public static void main string args" as I was by

- Starting execution in MyClass.main,

- Instantiating a MyClass, then

- Calling MyClass.run().

That took some head-scratching. It's an argument against Java as an introductory language, but not much of one, because "Just ignore that bit until CS102" (or forever if you're never going to do another CS paper) actually works pretty well.


> It rings true to me, at least in CS and especially software engineering. In mathematics, physics and "real engineering" departments I haven't seen it at all, though, and my experience may date me.

Ah - yes. My friends at uni doing physics said they had to learn C++ - but only enough to write simple C-style programs and using simple classes without going into OOP design, design patterns, and large-scale concerns (I guess because they were all using some physics tool, and I understand it's good for interfacing with Fortran too).

My friends doing EE (excluding those doing embedded work) said they all had to do Matlab, some friends doing mathematics reported the same. EE people also said they did some C too.


In my school currently they offer intro to programming in Python. For those who already know the basics the first course is using C. The next one is C++ and after that Java.


Today, Java and, increasingly, JavaScript have supplanted the Pascal, BASIC, and C that kids in the past started with.

Scheme is cursed. No one wants to teach it, especially to beginners. MIT abandoned it in favor of Python for its introductory course more than a decade ago, and even the Racket folks are backpedaling hard on Lisp syntax, proposing a conventional infix syntax as the default for Racket2.


My university began with OOP.

I had to discover FP myself! Uphill both ways!


My uni had a semester-long course on Haskell which I took - but it was mostly about the syntax, understanding `foldr` and writing trivial programs - I can’t say I really learned anything from the experience - I guess because we never did any large-scale examples (I dare say “real-world examples”). Yes, we can do quicksort in 3 lines - but how can I use Haskell for an OLTP workload? How can I use Haskell to program a GUI? (More to the point: how do we reconcile the mental-model of a GUI (the canonical example of mutable OOP!) with Haskell’s immutability?)

Unrelated: Haskell needs a more approachable alternative syntax. People bash VB.NET - but it’s a great way to transition people from beginner and PHB-friendly languages (where people completely unfamiliar with a language can still generally understand what’s going on, like Fortran, COBOL, the BASIC family, etc) into an environment where the full .NET library is available before fully transitioning to the alien-seeming curly-braced C#.

...even as a C# programmer, I prefer writing functional-style code with immutable data. I’m not so far gone as to write the core of a program using Reactive Extensions - but I do use things like Linq, for example.


  module Main {
    IO<Void> main() {
      Integer n <- read$(getLine) // or: fmap(read)(getLine)
      print(fib(n)) 
    }

    Integer fib(Integer n) { 
      Integer go(Integer !n, Integer !a, Integer !b) {
         if (n==0) {
            return a
         } else {
            return go(n-1, b, a+b)
         }
      }
      return go(n, 0, 1)
    }
  }
pros: it looks plausibly like Java. cons: all the things that haskell syntax makes easy like partial application become obscure at best. omg, i'm dying in squiggly and round brackets, what is this, lisp? there's too much stuff that has to become ad hoc builtin syntax, like <$> becomes $(...). i don't know if you can program applicatively with this syntax, but i doubt you could have invented it.


Based on your last paragraph, it sounds like you are describing F#.

F# is an immutable by default functional-first programming language in .net.


in retrospect, i wonder if you're proposing an alternative (lazy) language on the ghc runtime with the ghc calling convention, that can share types and code, so you effectively have a lazy java-like language with ghc interop?


My degree in the mid-90's used Pascal in the first year, then moved into C++, which back in those days meant 2nd-year students would all be implementing their own string, vector, list, double list, priority queue, ... classes.

C was never taught; it was thought to be mostly relevant for UNIX coding, and students could anyway get into it via the C++ lectures.

Then there was Prolog, Smalltalk, Lisp, Caml Light (OCaml's precursor), Algol, x86 and MIPS Assembly.

Those that opted into compiler design/software engineering lectures also got into a myriad of other programming languages even if only as overview of what used to exist out there, namely Ada, Eiffel, Sather, Oberon, BETA, PL/I,...


There’s little point in teaching either functional or procedural programming, as they’ll both be more or less covered when you’re learning Object Oriented programming.

There’s nothing preventing certain functions from being procedural or functional.


If you disagree, I’d love to know why.

Every time I see a post like this, or my comment get downvoted, I read the Wikipedia article again, and am forced to conclude that no, I still don’t appear to be wrong.

- Procedural programming: programming with procedures (e.g. functions)

- Functional programming: programming with only functions

- OO programming: programming with procedures and classes.


> I still don’t appear to be wrong.

Procedures are not functions! I can understand the confusion since both Java and OCaml will let you define both. Procedures have side-effects, e.g. mutating global state, (pure) functions don't. Functional programmers work with predominantly immutable values and this requires a very different style of programming - one that many people struggle with if they have not had much exposure to it. Functional programming languages also have syntax and abstractions (often taken from maths) that are not used in object oriented programming. Why program with functions and immutable data? It enables better composition.
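A minimal Python sketch of the distinction, using a made-up shopping-cart example:

    # Procedure: mutates state it doesn't own; every caller has to
    # remember that the argument changed behind its back.
    def add_item(cart, item):
        cart.append(item)

    # Pure function: returns a new value and leaves its input untouched,
    # which is what lets it compose freely and test trivially.
    def added(cart, item):
        return cart + [item]

    cart = ["bread"]
    new_cart = added(cart, "milk")
    print(cart)      # ['bread'] -- unchanged
    print(new_cart)  # ['bread', 'milk']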


> Procedures are not functions

https://en.m.wikipedia.org/wiki/Procedural_programming

I guess I just happen to be reading an article that doesn’t make any distinction.

> Functional programming languages also have syntax and abstractions (often taken from maths) that are not used in object oriented programming.

True, but this has nothing to do with functional programming per se.


I agree. The only way to have this language paradigm discussion productively is if we're all looking at two well written implementations of the same project in each.

Even then, I think the final conclusion (most likely "agree to disagree") will be that it's largely subjective, and heavily biased by familiarity.

Ultimately though, it's really just a distraction to avoid the much more controversial discussion we need to be having vis-à-vis tabs vs. spaces.


"You can write bad code in any language" is fairly proven.

Can you write good code in any language, though? At least, good enough? It's quite hard to study objectively, but my subjective opinion is "no". Trivially, brainfuck certainly doesn't cut it, but even if you're only looking at serious languages geared toward general, production-quality programming, there's a lot of programming languages that are much easier to make mistakes in or abuse than others.


> Can you write good code in any language, though?

Yes, absolutely.

> At least, good enough?

I think the better question is, can you write good code in a particular language and still make deadline?


I don't see how anything written in brainfuck could ever be called good code.


That's because you focus on surface syntax.

In the end brainfuck is just a fancy assembler.

You could very well write nicely formatted, structured, etc. brainfuck. It will just look weird because the (otherwise very simple) operations have a weird sigil syntax.


Come on, that's as objectively false as it gets.

Brainfuck doesn't give you macros, function calls, function structures. It doesn't have names for anything. In other words there is zero syntactic and physical abstraction. The programmer must assemble everything on their own, with extreme repetition, making it extremely hard to see structure, or to change it.

You can't even write efficient code in it since basic arithmetic operations (like addition, indexing) are O(n).


I'd say you can't write good enough code in any language. The most high-profile, extensively tested C networking code still has security vulnerabilities, for example--the best C code isn't good enough. And broadly speaking, I'd say good code meets deadlines, but I guess that's a bit of semantics on my part.


Donald Knuth is quite fond of C. He writes checks to people who find bugs in his code. They are quite valuable, because they are very rare.


TeX was implemented in Pascal and uses web2c to generate C code.

https://en.wikipedia.org/wiki/TeX

Had he implemented it directly in C he would be sending checks every day.


> Had he implemented it directly in C he would be sending checks every day.

He used Pascal maybe not for the reasons you think. Remember it was the late 70s. If you actually look at the TeX source code, I'm pretty sure you won't find the kind of Pascal you envision.

Nowadays, Knuth does use C directly. (FWIW)


so rare, industry defining geniuses can avoid fucking up too much with unquantified effort and no real deadlines or pressure to produce. got it


If you were as productive as Donald Knuth, you would have reason to be very proud.


Not networking code.


> extensively tested C networking code still has security vulnerabilities

Are you familiar with Daniel J Bernstein?

> the best C code isn't good enough

Are you sure _you've_ seen the _best_ code? Are you sure you understood the constraints of those writing that code when you evaluated it?

> I'd say good code meets deadlines

I'm trying to say there's an obvious compromise between the two, which explains the wide spectrum of code quality.


If you have to be DJB or Bellard to write good code in a programming language, that programming language probably has a problem. I used to write a lot of C back in the day, but I have to admit that C++ code is far more reliable and maintainable, which is a damning thing to say about any programming language.


I am somewhat familiar with Daniel J. Bernstein, but you'll have to enlighten me on why you've brought him up.

If you're referencing programs where half the source code is written in a theorem-proving language like Coq, that's not really "written in C", now is it?


> Have you seen bad functional code? It's an unreadable mess of monads, flatmaps and partial composition where you're not really sure how many arguments a curried function is withholding.

not yet. can you point to any examples?


This post really needs to be the required reading before any PL paradigm discussion.

If only dang could pin it to the top of every post with OOP in the title


It's much easier to write bad OOP and procedural code, TBH.


yes, but it's also much easier to write good OOP code... the learning curve is way steeper for functional programming; OOP is easier to grasp as it's closer to how we usually think of the world, while FP is more math-like...


I think it's easier to write OOP code period, without the 'good'. I think writing good OOP code is significantly harder than writing good functional code, because OOP gives users significantly more options to go astray.

The limiting nature of functional languages tends to force people into cleaner solutions at the cost of harder constraints.


The limiting nature of functional languages tends to force people into looking for cleaner problems.


Like massively multithreaded concurrency? Natural language parsing? AI? Deterministic security? Natural language parsing???

I suppose you're right in a sense, these problems are cleaner if you think of them in functional terms. ;)


We also have a lot more "holier than thou" gatekeeping in OOP: guidance that's more subjective than objective but taken as gospel, like "SOLID", "clean code", and stuff like MVC where your circular system is made to fit a square hole.


I reckon FP wouldn't have as bad a reputation if the Haskell folks didn't insist that <*> and <$> are perfectly acceptable notation.

FP does force you to pay more attention to state though, and losing track of the program state is the root of all evil.


actually they don't. many haskellers would prefer to write

  foo <$> bar <*> baz quux
using idiom brackets as, say,

  (| foo bar (baz quux) |)

in any case, the problem is that we want a lightweight syntax for a widespread, general purpose abstraction. the former syntax is used since it's an abstraction developed in terms of the language - so only general names and syntax were available.

obviously they're hostile to people who haven't learnt them yet, but i'm not sure that java's . or = is any better in that respect. these do cause learners difficulty, since they're so foreign to anything they learnt before programming.


Okay, but how we think of the world is a pretty poor way to program computers. If you write objects to act how we think of the world, then you mutate state within your objects, and when you add threading it becomes a mess. You can avoid this with immutability, but then you start doing stuff like replacing one Person with another with one property changed, and you're no longer treating Objects like they are in the real world.

A lot of good OOP practices like this are built in and somewhat enforced in FP. For example, the single responsibility principle: which pseudocode implements single responsibility better?

    immutable class Schema:
        Schema(self, ...):
            ...

        bool check_document(self, Document document):
            ...

        Schema with_constraint(self, Constraint new_constraint):
            ...

        Schema without_constraint(self, Constraint old_constraint):
            ...

...or...

    ... -> Schema
    make_schema(...) ...

    Schema -> Document -> bool
    make_schema_checker(schema) ...

    (Schema, Constraint) -> Schema
    add_constraint(schema, constraint) ...

    (Schema, Constraint) -> Schema
    remove_constraint(schema, constraint) ...
Sure, you could separate out `class ConstraintChecker` and `class SchemaBuilder`, but you might not think to do that at the time, it costs an extra file, what's the harm, right? And I really doubt many OO programmers would go to the lengths of separating `class SchemaBuilder` into `class ConstraintAdder` and `class ConstraintRemover`. In fact, that would be so verbose that I'd probably argue that it's not good code. Meanwhile, the same functions in a functional style are each already separated because that's the only option to do it, and it doesn't come with the verbosity cost. Closures are inherently single-responsibility objects, so much so that you don't even need to specify which method on them you're calling, you just call the object itself and it does the only thing it could possibly do!

I suppose it might be more intuitive to do this:

    (Schema, Document) -> bool
    check_document_against_schema(schema, document)
Which would force you to leak the concept of the Schema to places that don't need to touch it, but at least check_document_against_schema still has a single responsibility.


Well, it's not like you have to mutate the state or program with side-effects just because you have some classes... once someone starts grasping on OOP principles, you can teach them the best practices, what to avoid, etc. That said, I definitely think that FP is the right way to go, I'm personally moving in that direction, but I think people should first get their feet wet with OOP, as it's easier to learn, and there's a lot of common knowledge between the two. Once you learn OOP and start to feel the problems and limitations, transition to FP will make more sense IMHO. It will not be just because some dudes on Internet said it's cooler, but you can spot the benefits yourself.


> Well, it's not like you have to mutate the state or program with side-effects just because you have some classes...

That's true, as I said in the post you're responding to. But as I also said in the post you're responding to, then your claim "OOP is easier to grasp as it's closer to how we usually think of the world" ceases to be true: when you give one of your employees a raise, do you replace them with another employee who is identical except for their salary? Obviously not, but that's how you'd give someone a raise if `Employee` is a class of immutable objects. You get to have one or the other:

1. Either OO is similar to how we think of the world and you're writing bad OO code that's got mutation all over the place,

2. OR OO isn't similar to how we think of the world, but you've got immutable objects that play nicely with concurrency.

You can't have both.
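In code, the immutable "raise" really does look like replacement - e.g. this Python sketch with a frozen dataclass:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Employee:
        name: str
        salary: int

    alice = Employee("Alice", 90_000)
    # The raise builds a new, nearly identical Employee. Great for
    # concurrency; nothing like how we think of people in the world.
    alice_after_raise = replace(alice, salary=100_000)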

> I think people should first get their feet wet with OOP, as it's easier to learn,

"OOP is easier to learn" is a claim I hear a lot, but I've yet to see any evidence that this is the case. Most people's evidence seems to be, "The way I learned it is the easiest." Given that most people start with OOP, it's unsurprising that many people think OOP is easier. There's also more learning materials available for OOP. But if you had completely untrained people learning programming for the first time with equal quality learning materials and instruction, I don't know which group would make progress more quickly, and I don't think anyone does know.

In any case, if the ease of learning OOP comes from being able to start with bad habits (i.e. mutation) that allow you to write code the way you think of the world, I question whether this is actually a good thing.

> and there's a lot of common knowledge between the two.

This I totally agree with you on.


The supreme question really is: if you have really great OOP code, really great functional code and really great procedural code, all achieving the same thing, which would be the best? i.e. which would be best in the front end? in the back end? in services where you do loads of calculations and analysis? in services where you just need to retrieve data?

It's probably best to be a full-stack developer, or find specialists who are really great at OOP and have them work where it properly applies in a project's architecture; the same for functional or procedural.

We don't need to box ourselves (or a company) into OOP only (or functional or procedural if that's your main paradigm)


> The supreme question really is: if you have really great OOP code, really great functional code and really great procedural code, all achieving the same thing, which would be the best?

IME from a maintenance perspective (which is the only place you really find out what is and isn't good code) procedural is nearly always the answer, stupidly simple imperative logic is easy to follow and debug. In the UI you might want some more functional/event driven code but even then the bulk of it should be procedural. Most "great OOP" I've seen isn't all that OOP anyway, it's mostly procedural code with some OOP features used for name spacing.


The difference between them is probably that OOP was treated as the solution to all problems in its heyday. And a lot of us are probably saddled with the resulting legacy code to this day.

That never quite happened to functional code, and procedural code was the only solution to problems at the time.


In my opinion, C++-style OOP was a solution to a particular problem: Here's a procedural program. It's got some data. That data is grouped together for convenience in, say, C-style structs. The problem is that any function in the entire program can modify the data in the structs. When the data goes bad (self-inconsistent, say), that gives you a lot of places to look. Someone wrote a function that doesn't maintain the data in a consistent state, and it could be anywhere. In a 10-million-line code base, that could be a long search.

C++ classes changed that. Nobody gets to modify the data except member functions (presuming no inheritance). Maintaining the consistency of the data is easier, because there's a very limited number of functions to look at to make sure they preserve the invariants.

With inheritance, you can still do that for the parts that you make private.

Now, is that the ultimate answer to how to do software development? Perhaps not. It was a step better than the way things were before, though.
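To illustrate with a Python sketch (a made-up account whose invariant is "the balance never goes negative"; Python enforces privacy only by convention, but the idea carries over):

    # Struct style: any code anywhere can break the invariant.
    account = {"balance": 100}
    account["balance"] -= 500  # oops, and good luck finding who did this

    # Class style: only the methods below touch the balance, so there
    # is exactly one place the invariant could be broken.
    class Account:
        def __init__(self, balance):
            self._balance = balance

        def withdraw(self, amount):
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount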


I would much rather struggle to interpret a complex expression (monads, partial application) than struggle to simulate a sequence of mutations (that potentially spans the entire object graph).


> “OOP is prevalent because cheap OOP developers are readily available, while functional programmers are typically more smart, and more expensive…”

The reason I prefer functional programming over OOP is that I'm just not smart enough to understand all the OOP abstractions and how to build reliable systems with it. Functional programming is just easier to grasp mentally, as it takes away a lot of the foot-guns (mutable state, inheritance, etc).


How to write good OO code: write it like it's supposed to be Haskell. Avoid state, favor composability, prefer static functions, treat classes like types, throw errors to enforce type safety. All the fancy OOP design patterns and SOLID stuff just enforce or implement the principles which are provided by the semantics of strongly typed functional code. It's easier to be a bad programmer in Python or Java than Haskell, but being a good Haskell or Rust programmer is no more easy or difficult than being an equally good Python or Java coder, IMHO.
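A small Python sketch of that style, using a made-up Vector example: treat the class as an immutable type and keep the operations static and pure:

    from dataclasses import dataclass

    @dataclass(frozen=True)  # the class is a type; instances never mutate
    class Vector:
        x: float
        y: float

        @staticmethod
        def add(a: "Vector", b: "Vector") -> "Vector":
            # Pure: reads nothing and writes nothing outside its arguments.
            return Vector(a.x + b.x, a.y + b.y)

    v = Vector.add(Vector(1, 2), Vector(3, 4))  # Vector(x=4, y=6)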


I've always been a fan of thinking of systems as strictly-delineated combinations of two components:

• protocol glue (which is OOP/actor modelled)

• business logic (which is best served by being strongly-typed and statically verified.)

I'm a big fan of writing most of your code as libraries and "engines" in a language like Haskell or an ML (or Rust, if your algorithms are best specified on a pointer or RAM-word machine); and then taking those components and hooking them up to the outside world using an actor language like Erlang.

IMHO, this uses each language for its strengths. You've given the argument for why business logic is suited to languages like Haskell, so I'll give the other half:

Languages like Erlang, or Smalltalk, or other "true" OOP languages (i.e. languages where actors control not just their own state, but their own executing code) are good at reacting to arbitrary untyped messages, doing clever things after only partially parsing them, without needing a complete understanding of what they mean. This allows for components like proxies or brokers to react to metadata on RPC requests without needing to constantly be told about the typings of new messages. This also allows rolling upgrades of distributed systems that don't require a preliminary "teach the system that the new message-type exists and it should ignore it" upgrade.


This sounds really interesting - can you point to any articles/talks/projects for more info or examples?

(If not... Hint, hint, nerd-snipe, nerd-snipe?)


Oh, don’t worry, I’ve been beyond nerd-sniped on this; I’m working on a little language-design just to better elucidate the concept. :)

(The goal won’t be to get anyone to use this language; just to get the concept understood in a way where other language maintainers might then choose to bring in a spin on it in their own designs.)


(Well, you know Linux is also just that toy OS that won't be big like GNU or anything... ;))


> “OOP is prevalent because cheap OOP developers are readily available, while functional programmers are typically more smart, and more expensive…”

Yuck-o. That's a physically painful amount of bias.


That was embarrassing to read. I can program decently in Scala and a bit in Elixir and I'm a goddamn moron...


He never said functional programmers weren't morons, he just said they were "more smart."


Even if it’s true (I don’t claim it is), it would be because FP’s brick-wall learning curve is a huge filter: given that FP isn’t widely used in SE (and you don’t need to learn FP to make lots of money doing SE), the people spending time to learn FP are doing it for esoteric appeal or intellectual curiosity - which, it can be argued, is a better indicator of general intelligence - the same way that people who know and write OCaml, COBOL, Ada and Erlang today are probably very smart.


This is basically Paul Graham's argument in "The Python Paradox".

http://www.paulgraham.com/pypar.html


All systems are OOP (i.e. mutable-state state machines, mutated through message-passing according to rules that are an opaque part of their own state) at some level.

Systems that don't describe themselves as OOP just have one "object": the OS process. This object is still there, and still has to pass messages through IPC, sockets, disk IO, etc.

(Or, if you drop the OS altogether and write a unikernel, your "objects" are [real or virtual] machines.)

This isn't an argument in favor of OOP as a be-all end-all paradigm for writing all your code in, mind you; it's just an argument against thinking you can have a language that has no OOP aspects whatsoever. Maybe you can, if that language only models pure, side-effect-free computations! (eBPF could theoretically be non-OOP, I guess?)


All systems are _NOT_ OOP.

State, or the existence of an "object", does not signify OOP any more than the existence of a function or a function call signifies FP.


I generally prefer FP style for most things as well, but sometimes FP can be taken too far.

Trying to understand type signatures that are 3 lines long with nested class types/generics and getting your code to fit into some kind of monad transformer pipeline can get confusing very fast.

For problem domains where you have a lot of state that is shared all over the place (like trying to implement a vector editing program, electronic schematic editor/simulator, or game development), I find OOP models easier to work with.

Monads can be used when you have shared context, but then your code gets constrained and a bit unnatural; you have to worry about Kleisli composition, endofunctors, and plumbing things into and out of the state monad. Yuck.

FP is a tool. OOP is a tool. Know their strengths and weaknesses and when to use each. Yes, you can work entirely with a single paradigm but IMO you're handicapping yourself by doing so.


>signatures that are 3 lines long

What hath God wrought


I can read and understand procedural and OOP, but I've never been able to make sense of Lisp or Haskell code.

Like, not even the super simple examples which show you a side-by-side comparison.


Yea, but did you open a book on either language?

I've trudged through a few Lisp books and tutorials and at least part of a Haskell book. You're right that syntax-wise they're very different than Python & Java, but the simple examples aren't hard once you learn more. Granted, I can't write even moderately advanced Haskell code, but quick-sort and some list comprehension code shown in "Learn you a Haskell" is pretty straightforward. "Real World Haskell" is also not too bad (at least the first few chapters). If you're even aware of Lisp & Haskell, you'll be able to pick them up at least at a superficial level with a little effort. Lisp is just weird due to the parentheses and the pre-fix notation, but the principles are very simple and allow you to directly manipulate the syntax tree as there is no difference between code and data. For example the simple lisp expression (+ 2 2) returns 4. What about (* 2 (+ 2 2))? The inner expression returns 4 and that is passed to the multiplication operator with 2: (* 2 (+ 2 2))###### (* 2 (4))###### (* 2 4)###### 8######

Edit: I have no idea how to format things on HN, so I put six "#" to represent a new line.

Btw...David Touretzky's Common Lisp book is extremely high-level and approachable. My biggest issue with many Lisps is that installing and learning the tooling (e.g. Emacs) is a huge pain. I'd recommend you start with Racket as it is like Python in that it comes with a simple IDE, so you can focus on the language and not the tooling. Racket also has a lot of libraries for a niche language.


  (* 2 (+ 2 2))
  (* 2 (4))
  (* 2 4)
  8


Thank you good soul!


You are welcome. Here's how to format comments: https://news.ycombinator.com/formatdoc


If you're looking to learn more FP, may I recommend SICP? https://github.com/sarabander/sicp If you google around you'll find it highly regarded.

After the first chapter or two you can move away from working in the trenches and work on a more abstract level. The concepts are what make FP shine, and once you know them, you can bring those concepts into many OOP languages to your advantage.


Pretty much every OO language has implemented lambda (anonymous) functions in recent years.

It's actually a very enjoyable coding flow to create closures in stateless objects using C# or Java.
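Roughly this flow, sketched in Python rather than C# or Java (Retrier is a made-up example of a stateless object handing out closures):

    # The object holds no mutable state; its method just builds closures.
    class Retrier:
        def with_retries(self, n, f):
            def wrapped(*args):
                for attempt in range(n):
                    try:
                        return f(*args)
                    except Exception:
                        if attempt == n - 1:
                            raise
            return wrapped

    double = Retrier().with_retries(3, lambda x: x * 2)
    print(double(21))  # 42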


Not only that, Smalltalk already had lambdas (known as blocks).

Smalltalk-80 collections pretty much offer a subset of LINQ.


Closures are just memory leaks though.


A closure ends up being an object with references and can be cleaned up just like any other object. Or so I thought. What am I missing?


I mean, you could say the exact same thing about objects..


Not in Rust.


"Functional programming is just easier to grasp mentally" Disagree. Coming from a OOP background I'm still waiting for that ah ha FP moment.

"as it takes away a lot of the foot-guns (mutable state, inheritance, etc)" Agree. And it is far easier to test functional code!


Try Ullman's wonderful 'Elements of ML Programming'. Standard ML is a pretty good place to start because it's a relatively small language with a pretty straightforward semantics, and Ullman did a really good job of getting students into the mindset.


Yeah, I have also found that even very good OOP programmers will take longer to implement their designs, and they are no less error-prone. Unit testing seems much trickier as well.


If unit testing is added to an OO language in such a way that it allows you to ignore access qualifiers completely in unit tests (and only there), the whole process becomes a lot less painful.

I am not a programming language expert, but I think that enlisting the help of the compiler for stubbing and mocking could potentially ease a lot of pain.


C# has `InternalsVisibleTo` FWIW - though `internal` isn’t the same as `private` - but if you really want to unit-test private members you can always use reflection.

An OOP purist would tell you to abstract-out your fancy private logic into a DI-injected service that can then be tested separately.


> An OOP purist would tell you to abstract-out your fancy private logic into a DI-injected service that can then be tested separately.

Many OOP purists don't seem to think about ways the language itself could work differently to support them better. All these guys seem to remember is their set of favorite hammers.


Well, consider composition-over-inheritance, traits and mixins - all of which require language support, otherwise you get horrible developer ergonomics issues like having to repeat interface implementations over and over - or build your own domain-level implementation of traits - which just wastes people’s time.


Functional programming is definitely not easier to grasp, but it does make compositionality explicit. So the downside is that a single line of code can be denser and less intuitive. The upside is that you can reason inductively over large bodies of code, treating Desired Properties for large things as following neatly from Desired Properties for small things.


I grasped the concept of OOP pretty easily. Can't say the same about FP.


People say this, and I'm not sure it's actually true. People grasp some basic use of structs with methods on them pretty quickly and using that to do polymorphism. Most programmers I know haven't grasped how to use a system of message passing objects to make the semantics of the underlying language programmable at runtime.


To me OOP is about using "templates" to create "objects", which map nicely to real world entities/concepts. So a class is like a function, but unlike a function it can do more than define an input to output mapping. It groups together multiple related functions and variables, and allows "objects" to exist and be interacted with. These "templates" can be hierarchical and/or have more than one "base template", combining all their "features". This type of thinking feels natural to model real world objects and tasks.

The above is my "layman's" understanding of OOP (I've never taken programming classes, but I have been using OOP extensively in the code I write to do my machine learning research).

Can someone describe FP in a similar manner to someone like me? Because every couple of years I stumble upon another "intro to FP", and it just does not click for me. When exactly should I use it? How "monads" or "functors", or whatever else is important in FP, map to the real world, and real tasks? Maybe I just don't write the type of code that would benefit from it.


> This type of thinking feels natural to model real world objects and tasks.

There is some truth to this, but programs don't store data, they just process it. We already model the data in our data store. If we're accessing data from a DB or receiving it in a defined structure like JSON or XML, the data has (or at least should have been) modeled and classified already. What's the advantage of doing it again? The other issue I have with this core OOP concept of modeling things in human terms is while humans think like this, computers don't. It's just a machine. OOP just seems to create complex abstractions that don't really correlate to how a computer actually works.

That being said, I don't think the purist approach to FP is a magic solution to our coding woes either. The pure FP style definitely has its uses but I would never recommend that someone build something like a web app in Haskell or Erlang.

Personally I favor a hybrid style. Just use what works and makes sense for the task. Don't restrict yourself to some arbitrary set of self-imposed "rules" that define how you code. Any style of programming is equally capable of producing "bad" code, OOP and FP included.

If my task is to write a program that adds 2 numbers together the correctness of my coding style in relation to my chosen paradigm is irrelevant if the answer to "2 + 2" isn't "4".


I feel the same way. As hobbies I like to work with my hands on things like carpentry and metalworking. OOP maps pretty well to how I think about accomplishing step by step tasks on physical objects:

Step 1: Use this tool (instance of a class) to perform that modification (call a mutating method) to affect this workpiece (another class instance). Step 2: Now use that tool...

So if I’m building a cabinet, my mind plans out something like:

  saw.cut(wood1);
  saw.cut(wood2);
  wood3 = glue_bottle.apply(wood1,wood2);
  sander.sand(wood3);
Etc.

I’ve always thought of a program as just a more elaborate series of step-by-step commands. Computer, do this to this thing; now do that to that thing (which may, through a subroutine, mean doing other nested tasks). I can’t wrap my mind around accomplishing a step-by-step process with functional programming. Just haven’t made that mental leap yet, probably because I’m so used to thinking the other way.


> I’ve always thought of a program as just a more elaborate series of step by step commands...Just haven’t made that mental leap yet, probably because I’m so used to thinking the other way.

Yes, it's a hard mental leap the first few times. And there's another leap to logic programming that is almost as large as the jump to functional programming.

Perhaps a physics analogy would be useful.

A physicist may use a local, kinematic view: the object is here and has this velocity, and this force on it, so in a fraction of an instant that will make it move this much, and in another fraction of an instant it will move this much more...

A physicist may also use a global view: if I consider the set of all possible paths that start at point A and end at point B, what distinguishes the one an object will actually take?

Both are useful, and things that are obvious in one are usually quite obscure in the other.


For a simple case like this with no diamond-shaped dependencies, the direct functional expression is inside-out rather than top-to-bottom:

  sand(
    sander, 
    glue(bottle, 
      cut(saw, wood1), 
      cut(saw, wood2)))


Now write that code with tools that have durability. In OOP that is trivial and requires no code changes elsewhere.


> To me OOP is about using "templates" to create "objects", which map nicely to real world entities/concepts.

That's a really commonly held view, and I think it goes off the rails in several fundamental ways:

1. The templates as a separate language feature are unnecessary, a vestigial inheritance from previous languages. One of the things I really like about JavaScript is that it took Self's object system, so I don't need to have a class to get an object. Sometimes it's useful to have a function to generate a family of similar objects. Sometimes it's not.

2. Objects (values in the language) mapping to real world entities/concepts is a dead end. This was actually realized by the database community by 1970, with the relational model. In a relational database, you define relations which may be aspects of an entity, but except in simple cases the entity exists only as a smeared out set of aspects across tables. And further, there need not be an entity that all those aspects describe, or there may be a complex web of entities and different sets of aspects may pick out contradictory, overlapping members of that set. I have found the terminology of phenomenology (suggested reading: Sokolowski, 'Introduction to Phenomenology') as a really good vocabulary for thinking about this.

3. The original motivation for OOP was very different. Consider that you're exploring an idea for a new programming paradigm, approach to AI, something that expresses itself in a mathematical formalism. One of the bottlenecks of CS and AI research was implementing these formalisms as programming languages. OOP came from the observation that you can express these formalisms in terms of message passing actors, and message passing actors are a straightforward thing to implement. So if you have a programming system that's built on message passing actors and you want to try a new formalism, you can add a new arrangement of message passing actors to the same, live system to implement your formalism, which is a lot faster than writing a whole new compiler or interpreter.

> Can someone describe FP in a similar manner to someone like me?

The impetus for FP comes from Backus's Turing lecture "Can programming be liberated from the von Neumann style?" Backus's observation was that thinking programming in terms of "do A, then do B, then do C" doesn't scale to intellectual complexity very well, and pointed out that the obvious way to scale is to have some kind of "combining forms" that let you combine hunks of programs, and that a somewhat restricted form of function is particularly easy to combine.

The simplest version that we all know is a shell pipeline. This assumes that each command is a function from string to string and uses a function | to compose functions. We take it for granted that each shell command can't mess with or look at the inner workings of the other ones in the pipeline (a restricted form of function that is easy to combine).

In bash you would write

    ls . | grep pig | wc -l
In Haskell you would write

   length . filter (== "pig") <$> getDirectoryContents "."
The next step is map, filter, and fold. It turns out that fold is as powerful as loops, so those three together are a complete set of programming primitives, and they're super easy to combine.
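For example, here's an explicit loop next to its fold equivalent in Python (reduce is Python's fold):

    from functools import reduce

    # Loop version: the combining strategy is baked into the statements.
    total = 0
    for x in [1, 2, 3, 4]:
        total += x

    # Fold version: the strategy (combine an accumulator with each
    # element) is an ordinary value you can name, pass, and reuse.
    total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)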

And then you start climbing up the ladder of abstraction, looking for similarly easy sets of primitives that you can express stuff in and combine them. Monads, applicative functors, arrows, lenses...these are all structures that people have found while building such sets of primitives. There's nothing magical about them from that point of view. It's kind of like the fact that sequences (lists, arrays) show up everywhere in procedural programming.


I appreciate your answer.

> programming in terms of "do A, then do B, then do C" doesn't scale to intellectual complexity very well

I think that's the bit I'm struggling to understand. You gave an example of shell pipe, but that's easy to implement with simple procedural programming:

  f = ls(".")
  g = grep(f, "pig") 
  h = wc(g, mode="lines")
  print(h)
This code is easy to understand, and the advantage of doing it in Haskell is not clear. I guess it's not "intellectually complex" enough to demonstrate the power of FP. Is there some example where I'd study it for a bit, then realize "Yes. This is a nifty way to do X", where X is some task that is cumbersome (or dangerous, or ...) if done in a procedural or OOP style of programming? Because I definitely had a moment like that when I was looking at OOP examples.


That's a really clear statement of what you're after, and I'll try to provide a good answer.

When you write a procedural program, there are larger patterns or strategies that you use again and again, such as the simple pipeline you show above. Or you repeat until a condition is met, or you iteratively adjust a value. Any competent procedural programmer deploys these strategies with barely a thought. They aren't things in the language that can be manipulated and combined. And, truthfully, a lot of this has been dragged into what we think of as procedural languages today.

So consider a procedural implementation of listening on a socket and on each connection reading the request and sending a response. It's straightforward to write in a procedural language, but if we want to reuse that behavior, we need to parameterize it over the handler. That may seem obvious today, and it's a trick that C programmers have known forever (see qsort in the stdlib). But imagine setting up the whole language like that, making all the strategies that we toss out so blithely in procedural code, into higher order functions.

This pays off because you can debug and perfect the strategy and the particular parameters for behavior separately, which turns out to be a really useful separation. Get the socket listener code working where the response is echo, then plug in your web app's handler.


> Most programmers I know haven't grasped how to use a system of message passing objects to make the semantics of the underlying language programmable at runtime.

Can you please elaborate on this?

By “message passing” are you referring to Objective-C-style virtual calls - or to message passing in the OOP loose-coupling sense of disseminating events (i.e. broadcasting) instead of using strong references to call event handlers?


Objective-C's virtual calls are exactly the message passing I mean. They're a direct copy of Smalltalk's.


You could frame OOP as a single concept: an abstraction that can hold mutable state within itself. Though that doesn't really capture all OOP has to offer.

FP, on the other hand, is more like a set of tools. You can use many of those tools in many OOP languages. In this way FP is a lot like the procedural programming paradigm, which is also a set of tools.

And that is why

>I grasped the concept of OOP pretty easily. Can't say the same about FP.

is a no. Not because FP is difficult, but because it isn't a single thing to grasp.


Ok. Can you give me an example of any FP objects/tools which are useful? By useful I mean it would make my life (as a coder of short research oriented programs) easier. Similar to how learning OOP made it easier to keep track of all the moving parts in my code?


One example is streaming. Streams let you process arbitrarily large (effectively infinite) sequences, because they work on one element at a time as it arrives. Streams come from the FP world.
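
A minimal sketch of the idea in TypeScript (generators, my own toy names): the source below is infinite, but only the elements you consume are ever produced:

    function* naturals(): Generator<number> {
      for (let n = 1; ; n++) yield n; // unbounded source
    }

    function* take<T>(k: number, xs: Iterable<T>): Generator<T> {
      for (const x of xs) {
        if (k-- <= 0) return;
        yield x;
      }
    }

    // Only five elements are ever materialized, despite the infinite source.
    console.log([...take(5, naturals())]); // [1, 2, 3, 4, 5]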

Relatedly, function chaining also originates in the FP world and is used quite heavily in modern languages like Python.

You mentioned research, so a relevant example of this is the difference between Spark and MapReduce. Spark was introduced to optimize the processing of large data sets by streaming them; MapReduce does not stream.

For a heavier example, a few quite powerful data structures come from the FP world. For example, this CppCon talk introduces one that gives near-O(1) time for all operations (insert, delete, lookup, ...) and can be used in a thread-safe way: https://youtu.be/sPhpelUfu8Q


Because the OOP concept doesn't map to how a computer actually operates on data, and FP maps much more closely. It's certainly much easier in a functional language to optimize for the CPU cache and to support concurrency and parallelism.

OOP was designed for humans, and not computers; of course it is going to be easier.


I strongly disagree. How a computer actually operates on data is the CPU's opcodes, which: 1) have no notion of what a function is, 2) are quite happy to deal with global variables, 3) are perfectly willing to mutate data anywhere, 4) are willing to branch to arbitrary locations.

That's about as far from FP as you can get.


And OOP is closer?

Are you saying CPUs are object oriented?

Or maybe you just latched onto the comment about FP and decided to argue against it in isolation.

I am saying that CPUs are NOT object oriented and that functional programming is closer to how a CPU operates than OOP is, and therefore much easier for a compiler (and a compiler author) to reason about and to optimize.


> And OOP is closer?

No.

> Are you saying CPUs are object oriented?

No.

> Or maybe you just latched onto the comment about FP and decided to argue against it in isolation.

Yes, because it was flat-out wrong. FP is not in any way closer to how a CPU operates. (I mean, the rest of your post had problems, too, but that one bit was glaringly wrong.)

> I am saying that CPUs are NOT object oriented

That I will agree with.


Question:

Which functional programming languages are you specifically referring to with that statement?


All programming languages were designed for humans and not computers, outside of assembly and machine code, and some higher level languages that are tightly coupled with a specific machine architecture.

Computers are inherently procedural. Their code is inherently "unstructured". Structured Programming was developed for humans, for the maintenance and communication of programs by and to humans. And that's just basic procedural programming. Functional and OO programming are even more divorced from the underlying machine. And that's not a problem, that's a good thing for most programs that need to be written.

EDIT: Actually, even assembly was designed for humans. It's just that there's usually a 1-to-1 mapping between an assembly language and the underlying machine code.


Even machine code these days is pretty divorced from the corresponding hardware ops. A modern processor typically decodes each machine-code instruction into internal micro-operations before executing it.


I don't think anyone is smart enough to grasp it all. Especially the hordes of enterprise Java developers who create it.


> Functional programming is just easier to grasp mentally, as it takes away a lot of the foot-guns (mutable state, inheritance, etc).

I guess Lisp and Scheme derived languages aren't functional then.


Correct. Lisp was never a functional language.


So when I learned Lisp and ML, FP was impossible, it seems.

Only through the glasses of "Haskell, the only true way".


That's just bragging in disguise. "Look, I'm so smart, I can't understand how you make it so complicated".


I always say that it's pretty hard to write REALLY bad functional code. The same cannot be said of OOP.


It's pretty easy to write dreadfully unmaintainable lisp.


I wouldn't call Lisp a functional language. Historically it's not functional, and in practice it usually isn't functional either.

When I hear "functional", I usually interpret it to mean languages influenced by or derived from the ML family. Notable features of these languages include algebraic data types, match expressions, an emphasis on higher-order operations (fold, scan, map, etc.), support for TCO, an expressive static type system (though unfortunately not necessarily one supporting higher kinds), and a discouragement of mutation.

Common Lisp doesn't really emphasize any of those things, and Scheme only a couple.


That is describing Miranda and Haskell.

F#, Standard ML, Caml Light and Objective Caml would fail that bullet point list.


What points do ML languages fail on? They have everything I listed.


TCO (you need explicit rec annotations), immutability (ref cells, array types), and the existence of imperative control structures.


Just because you need a rec annotation doesn't mean they don't have TCO. Most people consider Scala to have TCO, and you need rec annotations there as well. Also, I said mutation is discouraged in ML languages, not that it's hard. Point taken about the imperative control structures, though.


Right, the generated machine code also matters.

Scala has only partial TCO - certainly not across mutually recursive calls, as the Scheme language standard requires and as TCO is defined from a CS point of view.

The same applies to F#, because neither the JVM nor the CLR has direct support for TCO.


Lisp, while somewhat functional in its preferred style, is an impure functional language and, as such, fairly easily supports imperative code with mutable state, with all the pitfalls that involves.

It's also usually dynamically typed, and I think the "hard to write bad functional code" line is mostly true of statically typed, pure functional code.


Well, I wouldn't exactly say that Lisp (CL) is functional. Also, if you're referring to macros then that's fair.


Is that mainly due to macros or something else? Macros is a whole other paradigm...


It depends on what your goals are. It's really easy to write slow FP code, for example.


I’ve come to loathe inheritance in OOP, favoring composition instead.

The idea of creating abstract base classes makes sense in theory, but it eventually leads to problems with inherited behavior (e.g. debugging the unexpected behavior of a "duck typed" interface somewhere up the inheritance chain).


I disagree. OOP and inheritance are fine, and so is just about every feature/paradigm/language/anything in programming, when used thoughtfully by an experienced developer. For instance, inheritance can often be used nicely at a framework or engine level.

Most people are just inexperienced and bad at programming, and bad at managing large projects, and if a lot of bad programmers and managers happen to be using some paradigm or feature or language, it is often the paradigm that takes the blame for their shoddy work because nobody is willing to admit that they are the ones at fault.

Functional programming feels nice to everyone right now, because lots of experienced developers are enjoying themselves with it, and writing decent code. But mark my words: that good feeling will end once it gets too popular and adopted by new developers. And then people will blame Functional Programming for all the awful work people do in it. If you think I'm wrong, you are way underestimating how bad new software developers can be.

We don't throw away all our hammers just because some bad construction workers use hammers on the wrong things.


I wish I could talk to you 5 or 10 years down the road when you have discovered and internalized the reality that a) there are no good programmers b) the programmers you're complaining about will never go away c) you're one of them

My contention against OOP is that it is a pattern that encourages critical mistakes that composition-based systems do not encourage.

My contention against FP is that it discourages even small amounts of state (e.g. state that never leaves the scope of the current function) and instead encourages many mental leaps to wrap your head around the transducer or whatever.

But if I had to pick one, FP tends to produce higher-quality code.


I've been programming for 20+ years now, nearly fulltime, and worked with many other programmers for the entire time. There are plenty of good programmers, and they all have a lot of experience. I've learned to tell the difference. It took a long time. Your blanket statement that developers are all bad is simply wrong, there are vast differences between them. Maybe to put it in terms you seem to prefer: some are "less bad" than others. And not everyone is bad in the same way, either.

But you're right, the programmers I'm complaining about will never go away, because there will always be new programmers, and old ones will retire or die.

My only point was that everyone's impression that FP tends to produce higher quality code is largely based on a selection bias from the code you've read, or based on your own attempt to learn the paradigm after already having had some amount of experience.

I've witnessed firsthand the rise and fall of many paradigms in programming. I've witnessed it with NoSQL. And Agile. The same thing is likely to happen to Functional Programming if it becomes too popular too quickly. When you see companies starting to post jobs with "Functional Programming experience required", that's when you're reaching the peak, and a bunch of new (bad) developers will come in and ruin everyone's impression of the paradigm.


The industry is very new. The rise and fall is natural (lolwat data mining?). But the function is not uniform. MongoDB might be one of the worst tools on the planet for relational data, but data that's only accessed by id and is dominated by variable length data? There's no better db on the market. Agile? Well, nobody's figured any of that stuff out yet, to say the least. We can't even estimate basic tasks still, so planning is strictly fucked until we can.

Additionally, there are so many things to learn, especially for up and coming developers that it's nearly impossible to select good tooling to learn in the first place. So you'll have them randomly spread across the tooling which, again, is indistinguishable from each other from the perspective of a new developer.

I guess my point is that you're complecting two orthogonal concerns: the process of figuring out this software thing (still in its infancy), and the process of bringing up juniors. Consequently, what juniors do before they gain sufficient training shouldn't have much impact on the value of tooling. That is, unless the tooling in question causes the juniors to make unnecessary critical mistakes because the tooling encourages it. QED I suppose.


By your logic, why do we bother with a guard on a tablesaw? The only people who chop their fingers off are inexperienced or bad at sawing things, they shouldn't be using it in the first place. Experienced carpenters never have a lapse in attention or judgement and will never cut their fingers.


The metaphor would work better if professionals (or even any significant number of amateurs?) actually used a guard on a tablesaw.


That's sort of the essence of the dispute, though. Inappropriate promiscuous subclassing is a bad idea, but really just a fairly small part of what people tend to call "OOP", most of which is pretty unobjectionable and good: inherent modularity, data-dependent name scoping, unification of typesafety with behavior/interface... no one complains about that stuff, and it's all part of what we picked up in the OO revolution of the 90's too.

So we end up in endless strawman arguments about "OO is bad! You should be programming in Flurf instead!" when, no, what makes Flurf special is just its choice of how to eliminate implementation inheritance. You can write great code in Java or whatever too, even while rejecting inheritance promiscuity.


The good parts listed as OOP don't seem exclusive to it though. They fall more under just plain, old, good program design regardless of the paradigm. But subclassing, while occasionally useful, does seem exclusive to it.


Isn't it sort of a truism that any good idea seems like a "plain, old, good" idea once it's become a consensus thing? I mean, sure, other modified paradigms have picked that stuff up, and the features aren't "unique" to OO. But they were certainly popularized by OO, and the languages we talk about as "OO" languages all implement them in mostly the same ways and for the same reasons.

So basically this is the no-true-scotsman way to perpetuate the strawman, I guess.


I'd say one property of good ideas is that they are able to survive much longer than bad, so, yes, they have a higher probability of reaching a general consensus. Bad ideas can also reach consensus. Consensus is not always an indicator of quality, and I don't believe good ideas are destined to reach consensus.

I can't really argue with the historical impact of OO, because it's obviously had a ton.

OO is always so loosely defined that a no-true-scotsman argument is almost inevitable, but I'd certainly say a subclassing model is one of the defining features of the OO languages we talk about.


Reflecting on C++ projects that I've worked on over the years, a lot of inheritance (especially inheritance of implementation) was unnecessary or made the code harder to understand. These issues with inheritance are reflected in the designs of new-school languages like Rust and Go.


OOP itself says pretty little about inheritance versus composition, but I do agree that inheritance became the focus of a ton of OOP/Java/C++ curricula, and I think we're all feeling some backlash from that.


If we are talking about SOLID, the S for SRP implies that we should favor composition over inheritance. Composition makes testing easier and reduces side effects. Not to mention delegation, aggregation, etc. In ES/TS, if you build small single-purpose lambdas, you can use things like pipe or compose to chain methods with a single unary input. Like this:

  private _getJiraDataWithPipe(data: CircuitViewModel[]): string {
    const pipe = (...fns) => x => fns.reduce((v, f) => f(v), x);
    return pipe(
      this._getCsvData,
      this._splitRows,
      this._removeDoubleQuote,
      this._formatLines,
      this._joinRows,
      this._setHeaderToBody,
    )(data);
  }
One input, 'data', passes down the pipe from left to right, being transformed as it goes. Since each of the methods called in the chain is a strictly typed lambda, there are no side effects.


Inheritance breaks encapsulation. When you break encapsulation your code becomes difficult to reason about. The OO code I've observed tends to have an abundance of global state in the superclasses.


Implementation inheritance does more than break encapsulation. It creates inherent fragility because of how it makes code defined in a base class dependent on behavior that may be overridden willy-nilly in a derived class; the possibility of overriding means that the whole bundle of public and protected methods/fields is essentially part of an object's external interface, due to its open extensibility - with method implementations typically calling through that same interface. This is far worse than what you would get with a plain old non-OOP program, even one that doesn't use encapsulation at all! In most cases, implementation inheritance can be treated as simply a bad idea that should be avoided altogether - composition is clearly the better approach.
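
The classic illustration of that fragility, as a hedged TypeScript sketch (my own toy names, modeled on the well-known add/addAll example): the base class calls its own overridable method, so an innocent-looking override silently changes base-class behavior:

    class Collection {
      protected items: number[] = [];
      add(item: number) { this.items.push(item); }
      addAll(items: number[]) {
        for (const i of items) this.add(i); // calls through the open interface
      }
    }

    class CountingCollection extends Collection {
      count = 0;
      override add(item: number) { this.count++; super.add(item); }
      override addAll(items: number[]) {
        this.count += items.length; // ...but the base addAll calls add again
        super.addAll(items);
      }
    }

    const c = new CountingCollection();
    c.addAll([1, 2, 3]);
    console.log(c.count); // 6, not 3 - the override double-counts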


The problem with the "abstract" keyword is it's more normative than descriptive. If there's not an actual useful abstraction involved, and/or if the abstract class doesn't actually represent a more abstracted generalization of a mental category, it's just code re-use.


I've come to the conclusion that in most languages we refer to as object oriented today, we probably shouldn't use inheritance at all. In Smalltalk it was a way to make generic behavior available to all objects. The prototype chain in Self was there for the same reason.

If you're writing a language that doesn't expose that machinery (which lets you reprogram the runtime), I've found that inheritance is usually a poor choice. Yes, Python programmers, I'm saying that you should never use inheritance unless you're doing metaclass programming.


Agreed. I love OOP--C# in particular is my favorite language--but I always advise more junior folks to avoid inheritance in favor of composition and DIP. I do wish languages like C# would make it easier to implement the "delegating member" pattern so I don't have to rely on ReSharper to generate all the boilerplate, but it's a small price to pay to avoid the pitfalls of inheritance.


Did you try F#? If so, what's your opinion about it?


I haven't, no. If anything I've been trending more OOP by starting to write C++. I just haven't had the opportunity to switch gears that much.


Well that's not really a critique of OO. A component is an object, no?

Inheritance is often overused, but it's obviously important and useful. When it works, it works seamlessly. You don't have to think about all the inheritance in collection libraries, for example.


You can write good collection libraries entirely without inheritance, just use interfaces instead.


But you'd still want inheritance in your interfaces even if you didn't rely on base classes that much. IList is an ICollection and an IEnumerable in C# for example.
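
A rough TypeScript analogue of that C# hierarchy (my own names), using interface inheritance only, with no base classes involved:

    interface MyEnumerable<T> { [Symbol.iterator](): Iterator<T>; }
    interface MyCollection<T> extends MyEnumerable<T> { readonly size: number; }
    interface MyList<T> extends MyCollection<T> { at(index: number): T; }

    // Any MyList<T> can be used wherever a MyCollection<T> or a
    // MyEnumerable<T> is expected, without any implementation inheritance.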


That's because the functional way is the correct way.


There is more than one "functional way".


They are referring to function composition, that's the point...


It is like JavaScript: I don't hate JS itself, I hate spaghetti-code people write with it (but I do hate automatic type casting).

I don't hate C, I hate dangling pointers and buffer overruns.

I don't hate OOP, I hate over-engineered class hierarchies, state hiding and leaky abstractions.

I don't hate FP, I hate when people write a new undocumented DSL for each project just because "it is so easy in an expressive language like X".


I've been programming for a quarter of a century.

It has taken me a long time to realize that "dumb code" is the best.

Dumb code is easy to read.

Dumb code is concise and to the point, not attempting to address theoretical problems (necessarily).

Dumb code does not overthink the problem.

Dumb code helps you sympathize with the machine.

Dumb code has no weird blackbox behavior.

etc...


This, exactly.

Brian Kernighan (of K&R fame): Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?


This is exactly why I love Go so much. The whole language is just dumb and simple, you need to REALLY work hard to make complex shit with it.

Doing things the dumb and easy (and maybe a bit verbose) way is the path of least resistance.

Compare that with C++ with its templates and Javascript with its ... everything and even Rust. They invite you to make the most complex monstrosity you can manage with all of the fancy stuff at your disposal.


> The whole language is just dumb and simple, you need to REALLY work hard to make complex shit with it.

On the contrary, the resulting code bases are more verbose and harder to understand compared to more expressive languages. And by the latter, I'm not even talking about things like Scala or F#. Java and C# are strictly more expressive and have stronger modeling power compared to golang. Reality is complex; you don't want a dumb language to model it, because the complexity will just translate into the code base.


Here's some expressive code. If you aren't familiar with Ruby, how long would it take for you to decipher what that does?

  def sum_eq_n?(arr, n)
    return true if arr.empty? && n == 0
    arr.product(arr).reject { |a,b| a == b }.any? { |a,b| a + b == n }
  end
The exact same code in Go would span multiple lines, but would most likely only be constructed of a few basic programming idioms that every programmer with a few months of experience can identify.
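
For comparison, here's the "dumb" version (sketched in TypeScript rather than Go, but the point is the same): longer, yet built only from idioms any beginner can follow:

    function sumEqN(arr: number[], n: number): boolean {
      if (arr.length === 0 && n === 0) return true;
      for (let i = 0; i < arr.length; i++) {
        for (let j = 0; j < arr.length; j++) {
          // same semantics as the Ruby: skip pairs of equal values
          if (arr[i] !== arr[j] && arr[i] + arr[j] === n) return true;
        }
      }
      return false;
    }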

Code is read 10x more often than it is written, so it should be easy to read and skim through. "Expressiveness" is a quick way to get incomprehensible crap that works, but that no one but its creator can decipher.


> If you aren't familiar with Ruby, how long would it take for you to decipher what that does?

Why would I not be familiar with the language of the code base I'm working with? This is a fallacy.

Secondly, I don't use Ruby, and from what I can tell, it's finding if any two non-equal elements in a given array sum up to a given number n.

I've worked on several golang projects. It gets tedious and messy very quickly, seeing code bases littered with one-use functions and with what are basically map/filter calls expanded into multi-line for loops. Not to mention it makes exploratory debugging much more tedious.


A corollary to that: the less code written, the better.


Not always. E.g., simpler code sometimes means turning one line into two or more, just to introduce variables whose names describe what's going on.


It's not just less; it's being as straightforward and simple as possible while avoiding any temptation to be clever.


I don't think OOP itself is bad. Bad abstractions are bad, no matter the language, and that is the main problem. Abstractions and domain models are created based on our understanding of a given problem, as well as on what is possible within a certain programming language/paradigm. Most of the time we initially do not have a good understanding of the domain we are working in, and because we are conditioned to keep code "organized", we tend to fall back on established "patterns" (which OOP has an abundance of...), but these almost always end up being the wrong abstraction. By the time we figure this out, though, it's already too late to change.

A functional language such as Haskell tends to make us think a little more about what abstractions we use, because they are such an integral part of the language that you can hardly do anything without them. We can "converse" with the compiler to come to a better understanding of how our code should be structured. You usually end up with a better domain model. However, this can also be very limiting. Lack of control over execution and memory layout is a problem in my domain (video games). It's also slower to develop in, because you can't take "shortcuts", which can be helpful for getting something up and running before you fully understand the problem domain.

I still use OOP daily (C++), but I never start out with abstractions when writing new code. I tend to create C-style structs and pure functions. Eventually, I will have a better understanding of how this new module should be organized, and will make an API that is RAII-compliant and whatnot. I think you can easily write clean code in OOP as long as the abstraction makes sense.


Out of all the comments here this one resonated with me by far the most.

You're totally correct: the goal is to find the right abstractions. Bad abstractions keep getting in your way, either by leaking too much or by being based on invariants that turn out not to be all that invariant, whereas good abstractions never need to be touched again later.

For me, the additional layer of "how hard is it to change the abstraction if it turns out to be wrong" is also quite important. In my environment (algorithmic R&D) you never quite know which invariant your next idea will break. And this is where OOP can really bite you - untangling code from a class hierarchy or twisting interfaces to bend to your new ideas is anywhere between infeasible and a complete mess. Composition over inheritance can help save a lot of frustration here. But sometimes interface/abstract base class + a single implementation also provides a very clean abstraction to separate the core model/algorithm from the fiddly details. Just don't fall for deep hierarchies or large interfaces would be my takeaway so far.

Maybe we just need to get better at giving up on abstractions when they fail? I think that's very tough because they tend to still dictate how we think about problems (plus the usual cost of rewriting code). Your approach of delaying the abstraction choice sounds very interesting (somewhat akin to Extreme Programming?), I will try to keep that in mind for the future. In thinking about this I'm mostly afraid of code sort of remaining at this "makeshift"/provisional level and never actually settling into something more organized (due to lack of incentive/time to improve it). I think that can be kept under control, but do you have any suggestions on that front?


> Your approach of delaying the abstraction choice sounds very interesting (somewhat akin to Extreme Programming?), I will try to keep that in mind for the future. In thinking about this I'm mostly afraid of code sort of remaining at this "makeshift"/provisional level and never actually settling into something more organized (due to lack of incentive/time to improve it). I think that can be kept under control, but do you have any suggestions on that front?

Actually, I find that more often than not, code written in this manner is actually easier to maintain, because there are no layers and concepts to untangle - just the raw functionality and data transformations. You don't need to put yourself in the mindset of the person who wrote the code, who might not have had the right concepts in their head at the time. Instead, you just parse the code like a computer - This makes it easier to figure out what the code actually does rather than being distracted by the intent of the author.

The main purpose of abstraction is to intentionally hide implementation details so other people can use your code without a full understanding of all the details. However, when maintaining code, obfuscation is the exact opposite of what you want - you want to fully understand what the code is actually doing. Internally, I forgo most abstractions. For public interfaces, I expose highly specific functions and datatypes rather than "generic" interfaces, and only the bare minimum. When uncertain, variants should favor duplication over generalization. It's up to the consumer of those APIs (which is often myself!) to choose whether or not to implement abstractions on top of them. In my experience, the need for an abstraction is either super obvious or, if you're unsure, it's unnecessary - so you can usually push the abstraction back until that point.


Using python, I tend to write much more procedural code than anything else - packages / modules / namespaces are much better for code organization and pure functions that don't hold state are just easier to reason about. Classes are used sparingly - really only when you actually should encapsulate some data or state, but generally they're not needed at all.

I've seen oop used as code organization too many times, and it's always a laughable mess.


Over the years my programming style has changed. I suspect it will continue to change in the future but for now I’m writing my own TypeScript code in a similar style. I tend to use TypeScript interfaces like C structs and write free functions that use them as the first argument. I find that this style works happily with the React hooks api.


Python functions are objects, instances of the function class, as this little snippet shows.

    def afunc():
        pass

    print(afunc.__class__)
    print(dir(afunc))


While technically a function is a class instance, it doesn't act like one in pretty much any common use-case.


I do agree that O-O design was fetishized starting in the 1990s and the results have been terrible. But this is strange to me:

> In fact, his essay ends by touting functional programming as the superior alternative — a competing paradigm in which problems are modeled not with objects but with pure mathematical functions and an emphasis on avoiding changes in state.

O-O is a way of encapsulating abstraction. That's it. It could mean actors, message passing, generic functions, composition and/or inheritance. There's no reason you couldn't program in a purely functional O-O way.

I build hardware that uses a variety of different kinds of fasteners, and circuits with different kinds of components. I could build all my logic circuits with only resistors and diodes (people used to do so, after all) but the fact that that's no longer a good idea doesn't mean I should try not to use resistors or diodes at all.


Having been around this block (procedural -> OOP -> FP -> ...) a couple of times over the past 20 years, I've ended up at a place where I always start with no classes and mostly pure functions, but don't shy away from adding a small handful of classes when it becomes obvious that they solve a real problem. Sometimes objects really are the cleanest solution, but they should never be your first solution.


> In non-OOP languages, like JavaScript, functions can exist separately from objects. It was such a relief to no longer have to invent weird concepts (SomethingManager) just to contain the functions.

This is one of the things CLOS (Common Lisp Object System) got right: It's object-oriented but functions are not "owned" by classes; they're separate and first-class and generic and can dispatch on multiple arguments. In CLOS, classes are truly just user-defined types, and classes don't impede your ability to write functional code.


Is he seriously using Javascript as an example of non-OOP language?


Having the idea of objects does not make a language "object oriented."

Compartmentalization and encapsulation are fundamental concepts of objects and those concepts existed long before "OOP" did


That was called modular programming.

SELF, which JavaScript takes its model from, is definitely OOP.


Sounds a lot like R, which I've seen smarter folks say is really quite Lisp-like.


R uses a generic-function, multiple-dispatch object system modeled on CLOS, so, yes, the two are a lot alike. I found CLOS a bit less obscure to work with, but I never really took the time to dig to the bottom of R and tunnel back up.


OOP is the only way invented so far to deal with complex mutable state. Many practical problems have a lot of that.

OS kernels are one example. Right now, Windows Task Manager says there are about 100k open handles in my system. The state behind an open file handle is huge: user-mode buffers, kernel-mode structures, encryption, compression, effective permissions, file-system B-trees, flash memory state, and more. Without hiding all that state behind an API, the amount of complexity required to write a byte to a file would be overwhelming. On Linux it's usually object_do_something(struct tObject *), on Windows it's DoSomethingWithObject(HOBJECT); it's 100% OOP, with objects hiding their state behind methods.
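
A toy TypeScript sketch of that pattern (hypothetical names, nothing like the real kernel structures): all the messy state lives behind a small method surface:

    class FileHandle {
      private buffer: number[] = []; // stands in for buffers, B-trees, etc.
      private closed = false;

      writeByte(b: number): void {
        if (this.closed) throw new Error("handle closed");
        this.buffer.push(b); // the caller never sees the internals
      }

      close(): void { this.closed = true; }
    }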

GUI is another example. Whether it's a web page, a rich-text document in an editor, or a videogame, the state is very complicated. GUI frameworks don't compute functions, they respond to events: from the user, from the OS, and from other parts of the program. All GUI frameworks are OOP, including web frontend ones; e.g. reactjs.org says "Build encapsulated components that manage their own state", which is a textbook definition of OOP.


I feel like there are two effects here: most code you will encounter in the workforce is poorly written or is now used in ways the author didn't intend. And because of its popularity, most code you will encounter there is also object oriented.

The second effect is, pay and hire-ability is often linked to years of experience in languages and tools for developers who don't have large projects or exits on their resume. That means that you are incentivized to learn new up and coming tools and paradigms, where your years of experience can quickly become competitive vs more established technologies, where people have years of experience on you.

Now, I happen to also prefer functional programming because it seems to more closely match the way I think, and I find it easier to responsibly manage state in that style. But I believe that these two potential causes have a larger effect than my personal preferences. I don't know how to measure this though.


> most code you will encounter is object oriented.

Well - most code you encounter was written in a language that was designed for object oriented programming, like Java or C# (and, really, Python and Javascript). That doesn’t mean it’s actually object oriented or particularly well designed. I’ve been working mostly with Java for the past 20 years and every single Java-based system I’ve inherited has been the same big-ball-of-mud as the others, with almost every function declared static and classes declared because Java doesn’t work if you don’t type “class” somewhere. It’s never reusable, never unit-testable, and never modularized.


Because it is fashionable to hate OOP, and many don't understand that OOP != Java, and that none of the multi-paradigm languages eschews OOP - rather, they embrace a mix of OOP and FP features.

Just like, apparently, any FP language that came before Haskell stops being FP, for whatever definition of what FP is supposed to be.


I don't think anyone is arguing that, say, Standard ML isn't a functional language, and if anyone did, they would probably be laughed out of the room.


Well, even on this thread FP is being discussed as a set of Haskell features, starting with the idea that all FP languages use immutable data only.

Which would make Standard ML not a FP language. :)


Then I guess we can start laughing at some people, yes?


It seems like it, but I rather prefer the quixotic view that some education is still possible.


> 'Right tool for the right job' 'No silver bullet' 'Bad programmer write bad code'

Those are all good tools for not thinking, not progressing, and being comfortable ever after. But no, everyone knows those lines, and they're not the answer to the question. 'No silver bullet' could only be true when accidental complexity is very small compared to essential complexity, and that's far from the truth in today's industry.

OOP is bad because it's a foot-gun. It forces people to make strong assumptions by default. It's only a tool for modeling static, mid-sized problems.

When the assumptions are very strong, it means the behaviors in a class are strongly coupled and its relations to other classes are static and specific. And that's why people feel OOP is more understandable and readable - because OOP toys are very specific, and thus readable.

It's also very vulnerable to change. To deal with that, people have to make fewer assumptions, but every technique for doing so has serious trade-offs - people have to choose. And those techniques are where all those obscure 'AbstractFactoryBuilder's came from. For the same problem, 10 people tend to make 10 different trade-offs, using different obscure solutions, and 8 get burned in the next release.

For example, to model the concept of a Payment, in a mediocre functional programming language it might be an algebraic data type:

  type Payment = FreeCoupon | Cash Amount | CreditCard Number CVV
Every time you want to process a payment, just handle the three possibilities concretely; if there's duplication, just extract it. If there are 100 functions operating on the Payment type, that's perfectly fine, just as Alan Perlis described. The type is perfectly natural - just what it should be. There is a very high chance of people arriving at the same design independently.
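
The same idea lands almost verbatim in TypeScript's discriminated unions (a sketch with my own field names): each consumer handles the three cases concretely, and the compiler checks exhaustiveness:

    type Payment =
      | { kind: "FreeCoupon" }
      | { kind: "Cash"; amount: number }
      | { kind: "CreditCard"; number: string; cvv: string };

    function describe(p: Payment): string {
      switch (p.kind) {
        case "FreeCoupon": return "free coupon";
        case "Cash":       return `cash: ${p.amount}`;
        case "CreditCard": return `card ending ${p.number.slice(-4)}`;
      }
    }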

In so-called OOP, this would probably be:

  class Payment
    getFoo()
    getBar()
    doBaz()
  class FreeCoupon extends Payment
  class Cash extends Payment
  class CreditCard extends Payment
In order to accept the Payment type, you have to bend three subclasses into the Payment interface. And there would probably be 100 methods on Payment, and the refactoring result would be totally opinionated: everyone has their own idea, but none of them solves it without introducing new problems.


So much bashing on mutability. Yeah, I get it most of the time it’s bad. But there are a whole lot of problem classes which are either unsolvable or painfully slow without mutable data.


Rich Hickey: "By default, an object-oriented program is a graph of mutable objects. That's a very, very difficult thing to think about, debug and keep running." Basically OO is exactly the worst to deal with increasing complexity.


Every programmer should at least get some familiarity with Clojure. It maintains extremely good balance between simplicity, pragmatism and safety.


Done right - a focus on composition, message passing, little to no inheritance - I find OOP a joy to work with.

It’s almost never done that way...


In my anecdotal experience, attempting / allowing "true OO" programming turns quickly into an unreadable, unmaintainable, unreliable ball of mud. After leading a team of ~30 developers in a monorepo for the past 5 years, I have been relatively successful in avoiding complete software entropy by enforcing the following "code commandments":

- Modular, layered codebase (for ex: interfaces -> service layer -> DAO layer)

- Business Logic resides in Service Layer

- Service Layer made up of collection of Transaction Scripts (https://martinfowler.com/eaaCatalog/transactionScript.html)

- Simple, immutable domain model, with distinct set of immutable objects for each layer. Unapologetically, this is an "Anemic Domain Model", but with separation and no re-use between layers. This allows for immutable objects at each layer.

- Services are decoupled and composed around the data / logic, thereby "owning" their own data. Services can communicate with each other to get data, DAOs are never to be shared among other services. This is in a sense, "microservices without the network".

- Testing at every layer, while respecting the "test pyramid" (https://martinfowler.com/articles/practical-test-pyramid.htm...). Bang for buck is important here.

- Last, but certainly not least: Composition Over Inheritance.


> What happened in most of the world starting in the 70s was abstract data types which was really staying with an assignment centered way of thinking about programming and I believe that... in fact when I made this slide C++ was just sort of a spec on the horizon. It's one of those things like MS-DOS that nobody ever took seriously because who would ever fall for a joke like that.

- Alan Kay

I don't hate on object oriented programming, but when I come across videos like this https://www.youtube.com/watch?v=oKg1hTOQXoY&feature=youtu.be... it is good enough reason for me to question OOP and explore alternatives.

I will say though... in my limited sample size of coworkers, it is more fun to work with people who attack OOP than those who defend it. But the best people to work with are those who don't get emotional about programming ideologies.


OOP's strengths are coupled-but-separately-evolving components. An app, the platform it runs on, plugins it uses, etc. all needing to talk to each other. This forces you to think hard about the interfaces between components, and how those interfaces themselves can evolve. OOP is good at expressing this sort of thing.

If you assume you have all of the code, then interfaces and their evolution don't matter so much. You can just refactor at will. OOP can't bring its strengths to bear.

The web has shifted things to the "all of the code" assumption. Servers are routinely self-contained. The browser's APIs are anemic and so are supplanted by third party JS frameworks like React. These frameworks are version-pinned by each site and bundled together into a single file. And when upgrading, it's just accepted that there will be pain and incompatibilities.


Managing state is hard. OOP expects state. OOP can make things harder than necessary.

Languages with both first class functions and class semantics are nice, because they allow you to choose the right tool for the job.

Inheritance is way overrated as a tool, though. Great for the bouncing-ball sprite exercise, otherwise a bother.


To me, the pain I've always had with OOP is the amount of things I need to keep in mind at a time when making a change, and also tracking side effects. That mental overhead takes me away from thinking about what my customer really wants with this feature, and slows me down.


> the amount of things I need to keep in mind

Minimizing the number of things you have to keep track of when you make a change was specifically what OOP was designed for... what's the alternative? Procedural/structured programming is _way_ worse. I think FP is better, but it's so hard to do that I've never worked with anybody who hadn't given up on it before I had a chance to really tell.


Out of curiosity, what did you find hard about it?


Functional programming? I haven't honestly had a chance to use it long enough to find it hard: every time I've worked in a group that was even willing to give it a chance, everybody else gave up before I had a chance to make up my mind whether it was harder/easier than OOP. I personally kind of like the functional aspects of Java, Javascript and Python (although I still don't have the first clue what a "Monad" is).

I've been working primarily in Java for the past 20 years, across four different companies, and as far as I can tell, most everybody gives up even on OOP, too: every system I've ever worked on that was written by somebody else was 90% static functions & static data, with a few "dumb object" data types just because that's what you get from JAXB.


> I still don't have the first clue what a "Monad" is

This is the simplest way I have yet found to think about them: http://madhadron.com/posts/2013-12-05-email-about_monads.htm...
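
For a one-screen taste, here's a tiny TypeScript sketch of my own (not from the linked post): a monad is essentially a wrapper type plus a flatMap that sequences computations, short-circuiting on the empty case:

    type Maybe<T> = { tag: "just"; value: T } | { tag: "nothing" };

    const just = <T>(value: T): Maybe<T> => ({ tag: "just", value });
    const nothing = <T>(): Maybe<T> => ({ tag: "nothing" });

    const flatMap = <A, B>(m: Maybe<A>, f: (a: A) => Maybe<B>): Maybe<B> =>
      m.tag === "just" ? f(m.value) : nothing();

    // Sequencing two fallible steps without any null checks in between:
    const parseNum = (s: string): Maybe<number> =>
      isNaN(Number(s)) ? nothing() : just(Number(s));
    const recip = (n: number): Maybe<number> =>
      n === 0 ? nothing() : just(1 / n);

    console.log(flatMap(parseNum("4"), recip)); // { tag: "just", value: 0.25 }
    console.log(flatMap(parseNum("x"), recip)); // { tag: "nothing" }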


I feel like keeping inheritance in mind, along with what other classes do and look like, is harder than remembering what a method does when chaining methods together or currying, for example.


Ah, ok. That's what Paul Graham says, too: http://paulgraham.com/noop.html. I believe him (and you), but since everybody I've ever worked with gives up on FP before it has a chance to prove itself, I've never done anything large-scale with it.


I think part of the problem is that a ton of the benefits of FP happen in the "middle" of a lot of programs. If you need to read files, parse something, process flags, or write any output... It doesn't really make a world of difference.


When I lived in Solana Beach, California, one of my two tech-guru neighbors used to (25 years ago) pontificate on how OOP was the ‘last programming paradigm.’ He thought there would never be anything better. It was the only thing, ever, that I thought he was wrong about. (My other tech neighbor was Stash Lev. He sold the 68030 replacement boards, with good floating point performance, for the original Macintosh, a wonderful thing indeed! Apparently Stash was never wrong about anything.)

I don’t hate OOP, I just dislike it. I currently use three programming languages, Common Lisp, Haskell, and Python. I don’t generally use Python classes or CLOS unless I am working with someone else’s code.

Purely functional code, where possible, and immutable variables avoid many problems.


An OOP approach is fairly straightforward to understand when you've got state and need to control how it's modified.

A functional approach is fairly straightforward to understand when you need to describe how state (maybe yours, maybe someone else's) is modified.

A place for everything.


As a person heavily invested in OOP, is there any good reference (book preferably) about Functional Programming?

I know there must be lots of resources, but finding a good one can be difficult when you are new to a field.


Programming Elixir is a good one - https://pragprog.com/book/elixir/programming-elixir.

It's (obviously) about the Elixir programming language, but was a pretty good introduction to functional programming for me, as well. As was working through problems on https://exercism.io/ in Elixir and Clojure.


This was my introduction to functional programming. The nice thing about Elixir for learning functional programming is that it's not nearly as complicated as Haskell, feels more familiar, and is dynamically typed.

I'd recommend this book, as it teaches you how to think about solving things with recursive functions (Elixir/Erlang has tail call optimization), pattern matching, etc. No need to learn the OTP concepts (GenServer, supervisors, etc.).


Something funny I’ve heard about Erlang, and so I assume Elixir is the same, is that the actor model is the perfect interpretation of OO, where even the messages themselves (often thought of as methods in OO) are objects, too!

So, perhaps in learning that, you’re learning a more perfect OO.


I've always liked Learn You a Haskell, but it's a trip to get through and you may need to re-read it a few times for things to stick.

Another helpful one is Functional Javascript. It showed me a new, better, way to write my JS code.

I found it can be easier on your mind when you pick it up almost like vim. First you learn how to move around with hjkl (perhaps a functional equivalent is using .map() and .filter() instead of for loops), then you navigate by lexical concepts with w, e, $, %, etc (maybe using .zip()), then build macros with @ (currying), etc. Each step you learn and use regularly helps when stretching your brain to pick up more. You definitely don't need to rewrite everything in a functional way to take advantage of it.


https://pragprog.com/book/swdddf/domain-modeling-made-functi... is a great book to get started with. Unlike most books on FP it doesn't start with an academic or mathematical approach but rather focuses on how to use FP to make solving every day 'business' programming easier and less error prone.


To be honest, I didn't really get the allure until I read "Programming Phoenix." Which isn't really a Functional Programming book. But, it does make clear some of the advantages of functional programming when working in the request/response cycle of your average web app. Picking up the other more abstract concepts had clear value to me after going through it, and so was much easier.


I suggested this somewhere else, but Ullman's old book 'Elements of ML Programming' is still a truly excellent introduction, and Standard ML is a much less exotic language than Haskell for someone coming from the OOP world.


The further we abstract ourselves from understanding of the platforms our code runs on, the worse off we are going to be.

Our job as developers is to write software that transforms data from one form to another. Any code that does not go directly toward solving the transformation problem you have, for the data you have, is wasted time and wasted effort today, and will cost you additional time in performance and maintenance for as long as that software is in service. The best code, the most bug-free code, the most stable code, is the code that doesn't exist because it didn't need to be written in the first place. Browse an Enterprise Java application and tell me there is no code there that could have been written much more succinctly and run much faster in another language.

Software development is not an art. It is a science - everything about programming and software execution can be measured and therefore measurable improvements can be observed. Almost no one does this -- why? Because of the "CPUs are cheap" and "RAM is cheap" excuses I have heard throughout my entire career.

I have never even heard of a study of OOP that found OOP to be a superior methodology in any meaningful way whatsoever, and I don't think I ever will. I don't think that result could ever be observed without a very strong bias in the study.


Just to tackle your first two paragraphs: I hate to say this, but this impression may just be a lack of experience on your part. It's not wrong exactly, just a very narrow view of the range of work that gets done with software.

To bastardize a better quote: all abstractions are leaky; some are useful.

Even assembly is an abstraction. And if you're working on tuning to nanos maybe that's the problematic abstraction. But I'd say enough worthwhile work gets done on top that it is a successful abstraction.

I think that's fair to claim all the way up your stack. An OS is worth it, even if you do have to fight it once in a while. Same with processes. Or the JVM. Or even gasp OOP classes.

They don't need to be sacred to be valuable.


Ok, so add as many abstractions as possible?

I didn't say they were bad, I said we add far too many.


I have tried out functional programming various times over the years. Over one span of some months I used OCaml to create something interesting. Maybe an OWL parser. That convinced me that something like ML was far superior for parsers.

And I have experimented a few other times, such as with LiveScript.

The last time I tried to stick with a functional design was maybe a year or two ago. I ended up repeatedly passing around a lot of parameters so that many functions had several parameters. For example, in order to make a decision, I had to analyze something in several ways, collect that analysis, and make a decision based on all of the variables in the end. So the parameters just started building up until the last function had like a dozen of them.

Then I grouped some of them into structures, but the functions that operated on those structures still took a few named parameters, which were supplied in the decision function. So even though the decision function had fewer inputs, its body was still complex, with numerous parameters.

This led me to use classes to encapsulate the state so that parameters did not always need to be passed at the end since they could refer to internal state.
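
A sketch of that refactor in TypeScript (hypothetical names): instead of threading a dozen parameters through every function, the analysis state is encapsulated once and the methods read it internally:

    class DecisionContext {
      constructor(
        private readonly scores: number[],
        private readonly threshold: number,
      ) {}

      private mean(): number {
        return this.scores.reduce((a, b) => a + b, 0) / this.scores.length;
      }

      decide(): boolean {
        return this.mean() >= this.threshold; // no parameters passed at the end
      }
    }

    console.log(new DecisionContext([0.2, 0.9, 0.7], 0.5).decide()); // true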


The issue is that we don't need OOP anymore - people forget that OOP had its heyday when memory and processing power were much smaller.

Passing a big list around, doing something to it, and passing back the entire list is no big deal today.

It's a much bigger deal if you have to do some weird decompression/insert because your main memory isn't large enough to hold the whole thing uncompressed. You need a "gatekeeper" to the single thing--and you may need coordination between the "gatekeepers" so they don't decompress simultaneously.

Or, you build a huge hierarchy of objects into a GUI because you don't have the memory to store the rendering state of each one, so you want to be able to look it up from its parent.

Oh, and you only have one thread.

In an era of gigantic memory and dozens of threads, the "advantages" of OOP simply don't make sense.


This argument does not make sense.

The more data you have, the more structure you need to help organize it.


I'm not sure what point the first comparison example in the original article by Ilya Suzdalnitski is trying to make. If you're going to cherry-pick an example that demonstrates complex and abstract OOP paradigms ( factory/repository patterns, serialisation, etc ) then the author should demonstrate how to address the same requirements that these patterns address in FP. It's also quite rich that they're using Javascript to illustrate their point. Javascript does not require the same OOP 'cruft' that languages like C# and Java might require in some situations. So crafting the most complex OOP-esque spaghetti you can think of in a multi-paradigm language is a disingenuous comparison at best.


The problem with OOP isn't the concept itself. IMHO, the problem seems to stem from the fact that people use a "Class" both as a means of organizing code and as a state machine. When you confuse the two, problems just get exponentially worse.


Totally agree.

The case for organising code should be named 'module' instead of a class.

It's very nicely visible in Python where class and module are the same thing - a dict essentially.


That is the concept of OOP - that you package together the data (state) and the functions that can operate on that data.


Yes, but it does not make sense to take this mantra to religious extremes. As soon as you have objects of different classes involved in the same state transition, you have a problem: where do you put the method that orchestrates everything? Usually, neither of the objects involved in the transition is the right place, but people shove it in there anyway.


You don't need to modify that data. The methods can produce other objects, in which case OOP starts to look very functional, the main difference being that you lead with the data (a.to_foo() instead of foo(a)). There are a lot of different definitions of OOP, and some of them amount to willy-nilly state modification everywhere, but other parts of it encourage you to see which data is coupled together and to treat that as a unit, thereby helping you think at a higher level about what you're doing.


Users of multiple dispatch OOP languages (Common Lisp, Dylan, CLOS, C++ STL) would disagree vehemently with this characterization.


Well, I'm a C++ STL user, and I agree with me...

Could you be more specific about 1) in what sense you call C++ STL multiple dispatch, and 2) why you think it disagrees with what I said?


If you bundle functions and data, you end up implementing M functions each on N types, or M*N. If you can separate your functions and your data, you implement M functions and N types, or M+N. This is an explicit design goal of the STL and Stepanov's research that led to it.

Similarly, Stepanov was explicitly trying to be able to write a polymorphic max function `T max(T a, T b)`. That is, the instance of the max function actually used is chosen based on all the arguments, not solely on the first, which is the definition of multiple dispatch.

C++ also has a single dispatch object system, and the STL even mixes them, putting things like vector::push on the class with the single dispatch system and the STL algorithms in the multiple dispatch system.
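
A hedged TypeScript sketch of the M+N idea (my own names): one generic free function works over any type that supplies a comparison, instead of N classes each carrying their own max method:

    const maxBy = <T>(cmp: (a: T, b: T) => number) => (a: T, b: T): T =>
      cmp(a, b) >= 0 ? a : b;

    const maxNum = maxBy<number>((a, b) => a - b);
    const maxStr = maxBy<string>((a, b) => a.localeCompare(b));

    console.log(maxNum(3, 7));        // 7
    console.log(maxStr("pig", "ox")); // "pig"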


For example via template meta-programming, or free functions with overloading.


Agreed. What I'm saying is that I think you avoid a host of problems if you distinguish between data and state.


> Why Are So Many Developers Hating on Object-Oriented Programming?

Fetishism, pure and simple.

Every language provides some apparatus to help organize programs. Some have more, some less. With less, you have to build your own, because everybody needs help keeping things organized.

Bad code fails to use what is there, or misuses it and makes things worse. There is lots of bad code, because too few love simplicity, so everything gets in the way of everything else.

Blaming OO should remind us of the old adage, "A poor craftsman blames his tools".

People hating OO seem to be Lispers bitter that Lisp does not occupy the place OO languages do. There are not as many of them as there might seem, just because most of them are around here.


What? Lispers don't hate OOP. Take Clojure for example: it is a FP lang, but deals with OO stuff better than OOP langs. Also: good craftsman knows how to choose a right tool for the job. And bloated OOP languages are often not the right tool.


Functional programming combines the power of abstract mathematics with the ease of understanding of abstract mathematics.

Now if you'll excuse me once I complete my Ph.D. dissertation I might be able to finish my text editor I'm programming in Haskell.


Haskell is not the only FP lang. Clojure, Elm and Elixir do not require an understanding of abstract mathematics.

Besides that, the basics of category theory are far easier to grok than OOP design patterns.


> Besides that, the basics of category theory are far easier to grok than OOP design patterns.

I know high-school dropouts and graduates from DeVry University who understand design patterns. Category theory--even basic category theory--isn't taught to undergraduate math majors (except maybe at MIT to honors students).

I mean, I appreciate functional programmers who want to do it because it is hardcore... but that's why they do it: because it's hardcore.


> I appreciate functional programmers who want to do it because it is hardcore

That's a misconception. Functional programming makes so many things so much simpler. I know Haskell can be seen as an impractical exercise for the sake of tickling your prefrontal cortex, but again: there are other functional languages, e.g. Clojure, which is simple and lets you get things done in a pragmatic, reasonable and relatively quick fashion.


Recursion is the worst way to do iteration because it requires cleverness (= harder to debug) and it's ridiculously easy to cause a stack overflow. A for-loop's counter, by contrast, is not going to overflow on a 64-bit architecture.

Closures are basically global variables and inherit all of the problems with global variables.

Continuations are basically goto's or memcpy and inherit all of the problems with goto's and memcpy.

Functional programming is ALL of the bad programming practices of the '80s. But because it's more abstract and mathematical it gets a free pass. Never mind the fact that computer science is not mathematics. And the benefits of mathematical abstraction DO NOT TRANSFER if the abstractions inherently violate the commandments against bad ideas like global variables and gotos.

There are solutions to ALL of the object-oriented problems--including the diamond problem, the circle-ellipse problem, and the overuse of type hierarchies. There are no solutions to the functional programming problems, because they inherently incorporate the bad ideas described in paragraphs 1, 2, and 3.


> Recursion is the worst way to do iteration because it requires cleverness

No, it doesn't; in fact, recursion is usually much easier to reason about, particularly in terms of termination.

> and it's ridiculously easy to cause a stack overflow.

It's ridiculously easy not to: just ensure your recursive calls are in tail position and that you are using a language that supports tail call optimization (which essentially every dedicated FP language, and a number of others, do).
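
A minimal sketch of the shape (in Python for familiarity; note that CPython itself does not perform TCO, so this is illustrative only):

    def total(xs, acc=0):
        if not xs:
            return acc
        # Tail position: nothing is left to do after this call, so a
        # TCO-capable language can reuse the current stack frame.
        return total(xs[1:], acc + xs[0])

    print(total([1, 2, 3, 4]))   # 10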

> Closures are basically global variables

No, they aren't even similar.
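
A quick counterexample in Python: each closure's captured variable is private to that closure, which is about as far from a global as you can get:

    def make_counter():
        count = 0                 # captured by the closure, invisible elsewhere
        def step():
            nonlocal count
            count += 1
            return count
        return step

    a, b = make_counter(), make_counter()
    print(a(), a(), b())          # 1 2 1 -- each counter has its own state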

> Continuations are basically goto's

Not really. They are similar in that they are a more powerful (and, therefore, dangerous if used incorrectly) flow control concept than simple function calls, but they aren't equivalent to gotos.

> Functional programming is ALL of the bad programming practices of the '80s.

No, that's unstructured imperative programming (though some structured languages preserved some of them.)


Closures are as hard to reason about as global variables are. Anywhere the function with the closed-over variable is called, that is the same thing as referencing a global variable, because that function could be called anywhere in the entire program. The difference is that the garbage collector deletes it once no function refers to it any longer.

Continuations are equivalent to memcpy... which should almost never be used. And tail-optimized recursive functions are still difficult to reason about.

And tail-call optimization (making stack usage constant) is... kind of self-defeating... because the only advantage of recursion is that it has a stack built into the function call/return operations. Anything a recursive algorithm can do is equivalent to just a for-loop and a stack data structure. And if you're tail-call optimizing it so that it has a stack parameter... then you're just using a clever way to write a for-loop :D Except functional programming doesn't allow you to mutate a preexisting stack, so you're also wasting space recreating a new stack on each function call.

Yeah. Abstract math rarely transfers directly to the computer world. There's a reason why the functional, concurrent language Erlang was only used in telecoms and languished in obscurity: that was the only domain it worked well in.


Err, I meant setjmp, not memcpy. Quite the faux pas!


Holy shit. You're onto something here. Hey, you should immediately send letters to companies like Walmart, Apple, AT&T, etc. Most of them are on the Fortune 500 list. They all use functional programming languages. You could save them billions!


Computer science uses a completely different system of logic than mathematics or mathematics-inspired paradigms like functional.

Fortune 500 companies aren't attracted to functional. They're attracted to the TYPE systems of many functional languages. And type systems are good. But you can build e.g. a dialect of C or Pascal using those exact same type systems with no functional programming at all.


Oh yeah? How do you explain the proliferation of FP idioms and libraries in PLs like JavaScript and Python? Or the success of Clojure and Elixir?


Python tries to move away from functional idioms whenever the developers realize that the idiom can be severed from the paradigm, e.g. Python's shift from filter(p, list) to [x for x in list if p(x)]. Guido van Rossum feels that the imperative way is easier to understand, and so do I.
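
The two spellings side by side:

    xs = [1, 2, 3, 4, 5]
    p = lambda x: x % 2 == 0

    functional = list(filter(p, xs))        # filter returns an iterator in Python 3
    imperative = [x for x in xs if p(x)]    # the comprehension Guido prefers
    assert functional == imperative == [2, 4]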

Functional programming languages have good ideas, like type systems or lazy evaluation. But many of these ideas are NOT INHERENT to the paradigm. Perl6 incorporates almost every functional programming idea but as an imperative language.


I found this video by Brian Will interesting when I was trying to learn about OOP [1]. It begins with some arguments, and there is a series of videos that follow where he attempts to illustrate the point. One of his newest videos is about an alternative form of OOP that he thinks works better.

[1] https://www.youtube.com/watch?v=QM1iUe6IofM

I assume that OOP is only meant for very large programs, and he unfortunately only demonstrates his ideas in smaller programs.

edit: added video author's name


Sorry that this is a bit meta, but I don't think links from thenewstack.io are good to post because of their Yelp-like tactics. They have a history of shaking companies down for cash. Years ago I worked at a company that was paying them tens of thousands per year, and when we stopped that because we saw absolutely no measurable return, they retaliated by writing an unfriendly article.

They are scummy, and I hope you don't learn this the hard way!


My take on why OO - as usually taught and practiced - is problematic: https://link.medium.com/GEgk2vjLpZ In short: it requires a lot of up-front design and abstract thinking, and the idea of architecting a program around state distributed over a large number of objects is fundamentally flawed.


These discussions are a bit tiresome. The major reason is of course "because it's popular." There's plenty of merit to OOP and there are certainly pitfalls. OOP dominates the popular languages, so you see its pitfalls most frequently. The grass is always greener, etc.

It's not like any of these discussions about time to deliver or complexity are based on statistical analysis.


I've grown so weary of dogmatism. Absolute answers are pretty much never universally correct.

OOP is good for some things. FP is good for some things. Usually a project has both kinds of things. If you ask me, pick a language like Rust, JS, Scala, or Python that's competent in both and use the pattern that's right for what you're trying to express in any given piece of code.


Because it’s a constant mental struggle to keep everything in mind when doing “classic OOP”. Inheritance, mutability, weak type systems with nulls. All that adds to the mental overhead.

It is possible to program in a classical OO language without so much struggle, but it requires a level of discipline that takes 10 years or more to acquire.


I like the old metaphor of languages that are "programmer amplifiers" (usually applied to Lisp, Forth, Smalltalk...). But the amplifier increases the noise and problems as well as the good parts.


For me, it's:

1. Because OOP lends itself to bloat, with lots of boilerplate obscuring what actually happens.

2. You're often stuck with what somebody else thought were the best abstractions, when they weren't, or they were for them but aren't for what I'm working on.


You seem to be describing Java. Very often people complaining about OO are really complaining about Java.

Java code is necessarily bad code, or at best mediocre code.


Also a bunch of object-oriented C++. To be fair, I've never written C# or Smalltalk, so I don't know what those are like.


People are less complaining about OOP itself and more complaining about inheritance and polymorphism. If you've spent any time trying to figure out which method actually gets called on a Python object, you'd generally agree.
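
For instance, with multiple inheritance the method actually called follows the MRO, which is not obvious at the call site:

    class A:
        def who(self): return "A"

    class B(A):
        def who(self): return "B"

    class C(A):
        def who(self): return "C"

    class D(B, C):
        pass

    print(D().who())                        # B -- you can't tell from "D().who()" alone
    print([k.__name__ for k in D.__mro__])  # ['D', 'B', 'C', 'A', 'object']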


Dead-end career "architects" going OO-crazy with inheritance, factories, unnecessary interfaces, ... instead of helping write more performant code or deleting code.


Another day, another entry in that one decades old flamewar.

Each paradigm has its place.


If you wish to learn French, Rust, Chinese or Java you can pick up a rule book in your native language and get a grasp of the language.

If you wish to study code, what most people don't realize is that it's a new language. It doesn't matter if it's written in a programming language you're familiar with: you need to understand the domain, the structure of the code, the naming of the variables. That's a whole other language, and most developers have to infer it by reading the code itself. Imagine learning a new language by reading a book written in that language. That's why Domain-Driven Design, with its ubiquitous language and bounded contexts, is popular in the OOP camp; that's why DSLs are popular in the functional camp, and why Lisp fans are crazy about macros. It doesn't matter what the language is: if you want to make your code a joy to work with, you need adequate documentation of the code structure before folks dive into the code.

Imagine a student with no experience trying to learn about the Linux kernel strictly by reading the source code compared to a student who first reads a book on the Linux kernel then dives into the code.

This is what APL was hoping to solve with its push for "notation as a tool of thought", but that brought its own set of problems. APL is like Chinese: the notation is a tool of thought. You can read one character and gain so much from it, but you have to learn the damn notation first.

The only languages that come close are declarative logic languages like Prolog, but they suffer from some of the same faults and only hold logically under the closed-world assumption. They're written by humans, so improper naming can be misleading too.

What's the solution for all? NOTHING! Some developers will always hate on whatever you use if you make them work harder than they want to.

With that said, the article was spurred by Ilya Suzdalnitski, who only seems to care about Medium claps. He wrote an article [1] about how OOP is a trillion-dollar disaster, then about two weeks later another [2] about how functional programming sucks and is a toy, with OOP as his solution. This is someone who's happy to straddle the fence so long as he gets views.

1 https://medium.com/better-programming/object-oriented-progra...

2 https://medium.com/better-programming/fp-toy-7f52ea0a947e?so...


> then about two weeks later another [2] about how functional programming sucks and is a toy, with OOP as his solution.

which is clearly satire:

"PS> As most of you have guessed, the post is a satire. To all of the new developers out there — don’t take this seriously, FP is great!"


Classes work well as interfaces to data. Thinking of a class as a mini database makes a lot of sense.

Putting in anything complex that isn't directly related to managing the data contained is where things start to fall apart.

Data formats are great. Simple data formats don't have to have dependencies, just an interface that abstracts away the details of accessing them. This is how software communicates. Don't forget that the most-used interprocess communication is the file system.

Classes are not modular just because they can be wrapped up between two braces. Dependencies destroy modularity and this definitely includes inheritance. Complex data transformations that communicate using data formats they both understand are how programs are made modular. Classes are a very useful way of dealing with data formats.
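
A toy sketch of the "class as a mini database" idea (hypothetical names; JSON standing in for whatever agreed-upon data format two programs share):

    import json

    class AddressBook:
        # A thin interface over one piece of data: it owns the format
        # details and exposes only the operations for managing the data.
        def __init__(self, path):
            self._path = path

        def _load(self):
            try:
                with open(self._path) as f:
                    return json.load(f)
            except FileNotFoundError:
                return {}

        def lookup(self, name):
            return self._load().get(name)

        def add(self, name, email):
            entries = self._load()
            entries[name] = email
            with open(self._path, "w") as f:
                json.dump(entries, f)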


It's an excuse to over-engineer. OOP encourages developers to over-think object hierarchies and build baroque code that is prematurely generalized.

There is nothing intrinsically wrong with OOP and it maps well to certain problem domains. The problem is the tendency of sophomore programmers to over-engineer everything.


> OOP encourages developers to over-think object hierarchies and build baroque code that is prematurely generalized.

How does it encourage that, exactly? In my experience, sophomoric programmers are just as capable of over-engineering in a functional environment. It seems more a function of cargo-cult behavior than anything else.


In my experience, the first thing people do in an OOP language is name their first classes. Like maybe you have a Twitter crawler so you create a Crawler class and stick state into it. And maybe you create a few more arbitrary classes that collaborate arbitrarily, and it's like name-driven development. State sprinkled around. And these nouns basically solidify as abstractions that last for the life of the project.

FP on the other hand inverted this experience for me. I thought of the data first and merely built a pipeline on that data. Didn't have to invent nouns and classes. Just transformations on that data.

OOP has most people inventing abstractions on day 1 like classes by its very nature. It never felt like the right way to approach a problem to me. And of course, on HN there's an endless amount of "well, OOP is just misunderstood, you don't have to do it like that" which just never syncs up with my real life experience.
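
Roughly the data-first shape I mean (a hypothetical, stubbed-out sketch; no Crawler object holding state between steps):

    def fetch(urls):
        # Stand-in for real HTTP calls.
        return ["<html>cat pictures</html>" for _ in urls]

    def extract_text(pages):
        return [p.replace("<html>", "").replace("</html>", "") for p in pages]

    def keep_matching(texts, word):
        return [t for t in texts if word in t]

    result = keep_matching(extract_text(fetch(["https://example.com"])), "cat")
    print(result)   # ['cat pictures']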


Yes. Humans tend, in everyday thinking, to reason in terms of agents with identity. Take that mentality over to programming and you end up with coupling and state: typical OOP spaghetti.

Of course, good OO programmers don't do this.

I remember reading C++ intro books in my early teens. Class Train prints choo choo. Class Car says honk honk. Both inherit Vehicle. I thought, why do they do this dumb shit and how is it related to programming?


> I remember reading C++ intro books in my early teens. Class Train prints choo choo. Class Car says honk honk. Both inherit Vehicle. I thought, why do they do this dumb shit and how is it related to programming?

As someone who was once a known C++ and OOP zealot, later found their way to functional programming, and now sits middle-of-the-road on things, I am curious how exactly you would go about implementing polymorphism in C++ for, say, a video game, where different entities have different sounds, such that a single function can play the sounds of many entities without using this "train is a vehicle" and "car is a vehicle" textbook inheritance.

I'm not saying I cannot think of other ways to do it, but I am not really sure what the deficiency in your example actually is; it seems like a pretty straightforward use of inheritance, and I can see it being useful in, say, a game like the one in my example.


Sure; inheritance is a relatively efficient way to implement single dispatch polymorphism in C++. Not much more to it. Thinking about it in that way has made my life easier.

Not that there's anything wrong as such with the example; it's just that somehow all the examples were so inane.


In that case, does it need to be a property of the class to play the "Toot toot" wav file instead of the "Honk honk" wav file? Or is that a property of the object? Distinct classes make sense when they have distinct behaviors; in this case they have the same behavior (play a sound) with a different effect (tooting or honking). Make it a parameter to the class constructor.

Would you make distinct classes based on the amount of damage creatures could inflict on the PC? Or would you make a parameterized class that allows the amount of damage to be changed?
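
A sketch of that parameterization (hypothetical names; play() is a stand-in for a real audio call):

    def play(path):
        print("playing", path)    # stand-in for actual audio output

    class Vehicle:
        def __init__(self, horn_sound):
            self.horn_sound = horn_sound

        def honk(self):
            play(self.horn_sound)

    Vehicle("honk.wav").honk()    # playing honk.wav
    Vehicle("toot.wav").honk()    # playing toot.wav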


I was thinking definitely of more complicated sounds than simply playing a static wave form stored in a file. I absolutely agree if that was the case, it's very simple to parameterize it.

In a more complicated situation, where perhaps you want some sounds to be simple waveforms and others to be more complex sequences of waveforms mixed in some manner, I was envisioning a `SoundEffect` base class from which simple and complex sounds could be derived, providing single-dispatch polymorphism.

A `SoundEffect` instance (or more likely mapping of names to sound effects) would certainly be a property of whatever entity contained it... but the `CarHornSoundEffect` is a `SoundEffect` and `TrainHornSoundEffect` is a `SoundEffect` relationship remains a straightforward and effective use of inheritance to provide single-dispatch polymorphism.
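
A sketch of that hierarchy, in Python rather than C++ for brevity (hypothetical names):

    from abc import ABC, abstractmethod

    class SoundEffect(ABC):
        @abstractmethod
        def play(self): ...

    class SimpleSound(SoundEffect):
        def __init__(self, path):
            self.path = path
        def play(self):
            print("playing", self.path)   # stand-in for real audio output

    class CompositeSound(SoundEffect):
        def __init__(self, parts):
            self.parts = parts            # any mix of SoundEffects
        def play(self):
            for part in self.parts:
                part.play()

    # One polymorphic call site handles both kinds of effect.
    horn = CompositeSound([SimpleSound("toot.wav"), SimpleSound("toot.wav")])
    horn.play()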


> In my experience, the first thing people do in an OOP language is name their first classes. Like maybe you have a Twitter crawler so you create a Crawler class and stick state into it. And maybe you create a few more arbitrary classes that collaborate arbitrarily, and it's like name-driven development. State sprinkled around. And these nouns basically solidify as abstractions that last for the life of the project.

FP was and really still is more of a "nerdy" thing in the wider world, so perhaps your seeing good FP vs. bad OOP is more a function of the people involved. Think of the kind of person who learns one language (Java / C#) at university with a "Cs get degrees" mentality about their skillset.

> I thought of the data first and merely built a pipeline on that data.

One can do this in any paradigm. One must also realize that this isn't a common (enough) refrain and that there is also a large difference between knowing it and knowing it.

With that said, I ask again: What about OOP specifically encourages terrible design?


How does one "over-engineer" a functional environment? By breaking functions down into even smaller pieces? By minimizing side-effects even further than before? These are all good things to do no matter what.

My one and only issue that I have with the "design"-side of FP is that it can be tough to organize multiple functions in an intuitive way. (The "soup of functions" effect, essentially).

It makes a new codebase tedious to trace when you first come onto it, but I'd still rather have that than deal with spaghetti OOP.


You can over-engineer point-free style to create an incomprehensible code base. Point-free style with more than one unapplied parameter is basically a nightmare.

https://wiki.haskell.org/Pointfree#Tool_support
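
A rough approximation of the problem in Python (compose() is a hypothetical helper; Haskell's point-free style is far more pervasive than this):

    from functools import reduce

    def compose(*fs):
        # compose(f, g, h)(x) == f(g(h(x)))
        return reduce(lambda f, g: lambda x: f(g(x)), fs)

    def shout(s):                  # pointful: the argument is visible
        return s.strip().upper() + "!"

    # Point-free-ish: what flows through is invisible at a glance.
    shout2 = compose(lambda s: s + "!", str.upper, str.strip)

    assert shout("  hi ") == shout2("  hi ") == "HI!"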


> How does one "over-engineer" a functional environment? By breaking functions down into even smaller pieces? By minimizing side-effects even further than before? These are all good things to do no matter what.

Offhand: overly tight coupling of unrelated but superficially comparable functions. Of course one can also do this in OOP environments. This is related to "God Functions" that are meant to handle way too much.

There are also those who fundamentally fail to understand what the point of FP is: the belief that it's just "my function isn't in a class."


Anything can be over-engineered, simply by violating Occam's Razor. Bad programs are filled with stuff operating on other stuff that is not essential. Programs in Lisp and Haskell do this as much as in any language, because few love simplicity.


Perhaps by generalizing to the point of futility


OOP does not encourage this. How people apply OOP is the problem. Most of the time when people complain about OOP they are really complaining about Enterprise Programming's knack for trying to design Neo[0] systems that can absorb absolutely any change request. This is how you get Java stack traces that scroll for 20 screen lengths.

[0] 'The One' true architecture.


Totally agree on that. Premature generalization is the root of all evil; it breaks the YAGNI principle. Over-engineering is often seen in Java programming because creating a new class is so easy that people often create huge numbers of ridiculously small classes.



