
I love "Unlearning Object-Oriented Programming".

I've seen so much criticism of OOP, can somebody provide more details/valuable links why "OOP is bad"? Honest question.

I'm a fan of Yegge's colorful and fun to read narrative explaining the issue: http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...

Essentially, OO forces you to think of everything as an object. That pattern is valuable at times, but the way you think about something and the patterns you're familiar with intrinsically shape your approach to a problem and how you can solve it, which can be a bad thing.

OO is not bad; it's merely bad to only use OO, to teach it as the only methodology in schools, to use it to the exclusion of all else. Like most things, OO is just one of many options and is not always the best one.

Interesting interpretation. I liked this quote simply because it struck me as peculiar yet completely true:

> the Functional Kingdoms must look with disdain upon each other, and make mutual war when they have nothing better to do.

The class-driven polymorphism/encapsulation/inheritance model of OO which is sugar over imperative programming has often been criticized: http://harmful.cat-v.org/software/OO_programming/

OO as the idea of late bound, dynamically dispatched objects that receive messages, retain state, maintain access and location transparency and optionally delegate to other objects... that's not really controversial, but also rarely done. Nevertheless, most great research operating systems have been object-oriented, and this is no coincidence.

> that's not really controversial

This is the most controversial part of it. I've got no problems with classes, objects, imperative methods, dynamic dispatch and all that - these are just semantic building blocks, sometimes useful.

What is really damaging is this very notion of representing real-world problems, in their infinitely diverse complexity, as something so primitive and narrow as communicating objects.

They're not primitive and narrow, that's the point. They're blobs of state that can masquerade under any interface - files, devices, windows or whatnot, all while having a common primacy so they are easily introspected, cloned and extended. In the Spring OS, for instance, you have the ability to compose or create entirely new names from existing primitives like environment variables, sockets, files, etc. by operations equivalent to set manipulation.
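The "masquerade under any interface" point can be made concrete. Here is a minimal Python sketch (class and function names are made up for illustration): two very different resources exposed behind one uniform interface, so generic code can treat them alike.

```python
# Hypothetical names: a stored file and a synthetic "device" behind one
# common interface, so generic tooling works on both without knowing which.

class MemoryFile:
    def __init__(self, data: bytes):
        self._data = data

    def read(self) -> bytes:
        return self._data

class Clock:
    def read(self) -> bytes:
        # A "device" that synthesizes its contents on demand.
        return b"12:00"

def dump(resource) -> bytes:
    # Generic code: it only knows the shared interface, not the concrete type.
    return resource.read()

print(dump(MemoryFile(b"hello")))  # b'hello'
print(dump(Clock()))               # b'12:00'
```

The point is that `dump` never needs to know whether it is holding a file, a device, or a window: any blob of state answering the common interface will do.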

> that can masquerade under any interface - files, devices, windows or whatnot

Why are you sledgehammering all these entities into this primitive and narrow way of thinking in the first place?

They are different. And in order for different entities to have some common properties you don't have to think of them as a hierarchical relation of communicating objects.

Treat them too differently without any uniformity, and you get a hodge-podge system of names that can barely interact.

An object here is just a blob of state that enforces a primal uniformity. Files still look, walk and quack like files, they're just represented under a common construct underneath which is immensely useful for the programmer and irrelevant to the end user.

OO is not the only robust way of representing uniform interfaces. OO preachers must stop claiming ownership of concepts that existed long before OO, and that are much better represented outside of the OO blindfolded way of thinking.

I'm not an OO preacher, I'm just making the observation that most research systems designed for uniformity were object-oriented. Otherwise when it comes to languages, I currently mostly use ones based on functions and pattern matching.

You can't really speak of the "OO blindfolded way of thinking" when all you do is contradict without any substance.

Unix is fairly uniform, and not object-oriented. Plan9 is even more uniform and still not object-oriented. Lisp machines did not feature much OO either, and yet had a very unified interface to all the OS entities.

Other than that, OS research is nearly dead anyway; nothing interesting is happening, besides perhaps Singularity, which is also not very OO - OO does not play well with static analysis anyway.

> Unix is fairly uniform

Compared to Windows, maybe, but in the grand scheme of things it is not. Hence so many research systems like Amoeba, Spring, Sprite, SPIN and even GNU Hurd that tried to create a general overarching metaphor across Unix's not-quite-uniformity.

> Plan9 is even more uniform and still not object-oriented.

9P is a transport-agnostic object protocol for representing resources as synthetic file systems. The trick many have with Plan 9 is that they import their prior knowledge of what a "file" is, but in reality Plan 9's concept of a file is quite different from other systems. This is right down to the Plan 9 kernel being a multiplexer for I/O over 9P.

Either way, my point is the "object" metaphor isn't really all that specific. Personally I'd love to see work on functional operating systems, but other than the boring House system, I don't think there's been that many.

> Either way, my point is the "object" metaphor isn't really all that specific

I still don't see how Plan9 is built on "objects".

> functional operating systems

I doubt functional metaphors are of any use in this domain.

My point is that the abstraction continuum is far more diverse and deep than something as stupid as communicating objects or reducing lambda terms. My bet is on a linguistic abstraction, which covers interface unification as well as many other things.

In real-world business logic and modeling you have to work really hard to make things fit an OO paradigm. It also gets hard to manage long term, as your data structures get stuck with all the restrictions OO lets you impose. JavaScript object literals are fantastic: they make data transfer objects simple, fast and unpolluted by business logic. Once you get used to passing around DTOs that way, a functional style of programming becomes more useful. For the most part, programming is taking data from one domain model and converting it to another; OO doesn't really help there, and if anything is a hindrance. Unfortunately software engineers love to abstract to the nth degree, so sometimes you get stuck with tight OO models of DTOs and transformation layers.
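A minimal sketch of the point above, in Python: keep DTOs as plain data (dicts here, the analogue of JavaScript object literals) and do domain-to-domain conversion with a free function instead of a behavior-laden class. All names and fields are illustrative.

```python
# DTOs as plain dicts; conversion between domain models is a pure function.

def order_to_invoice(order: dict) -> dict:
    # Pure transformation from one domain model to another: no classes,
    # no business logic riding along with the data.
    return {
        "invoice_for": order["customer"],
        "total": sum(line["price"] * line["qty"] for line in order["lines"]),
    }

order = {
    "customer": "ACME",
    "lines": [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}],
}
print(order_to_invoice(order))  # {'invoice_for': 'ACME', 'total': 25}
```

Because the data is just data, the transformation composes with any other dict-to-dict function, with no transformation layer of mapped classes in between.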

I think one of Rich Hickey's quotes expands on your point:

> It has always been an unfortunate characteristic of using classes for application domain information that it resulted in information being hidden behind class-specific micro-languages, e.g. even the seemingly harmless employee.getName() is a custom interface to data. Putting information in such classes is a problem, much like having every book being written in a different language would be a problem. You can no longer take a generic approach to information processing. This results in an explosion of needless specificity, and a dearth of reuse.

> This is why Clojure has always encouraged putting such information in maps, and that advice doesn't change with datatypes. By using defrecord you get generically manipulable information, plus the added benefits of type-driven polymorphism, and the structural efficiencies of fields. OTOH, it makes no sense for a datatype that defines a collection like vector to have a default implementation of map, thus deftype is suitable for defining such programming constructs.
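Hickey's contrast, sketched in Python rather than Clojure for illustration (the `Employee` class and field names are made up): information locked behind per-class accessors versus information kept in generically manipulable maps.

```python
# The class-specific "micro-language" for data:
class Employee:
    def __init__(self, name):
        self._name = name

    def getName(self):
        return self._name

# The same information as a plain map:
employee_map = {"name": "Ada"}

# Generic tools (select keys, merge, serialize...) work on any map:
def select_keys(m, keys):
    return {k: m[k] for k in keys if k in m}

print(select_keys(employee_map, ["name"]))  # {'name': 'Ada'}
# There is no equally generic way to ask an arbitrary object for "its data"
# without learning each class's accessors first.
```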

I think the most dangerous advice I got was to avoid DTOs and similarly domain-behaviour-free object graphs. That corner of the .NET world at the time called such a model an 'anaemic domain'. Instead, we were encouraged to model problems with objects for every noun, methods for every verb, arguments for every adjective, and all the code dealing with them attached to the class.

It worked fine while our domain remained small, but got ghastly quickly.

It's a way to keep doing all of the bad habits you learned from completely unstructured procedural code, including global state, but in a way which is marginally more encapsulated and therefore more likely to last longer than six months in the field, and might even survive maintenance.

It adds to this all of the extra oddness that object hierarchies bring, especially the oddness related to trying to force non-hierarchical concepts into a hierarchy. Meditate on whether an ellipse is a special case of a circle or vice-versa until you reach enlightenment, which is realizing that the question is stupid and that any situation which forces you to answer it is stupid.
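The circle-ellipse problem can be shown in a few lines of Python (the classic textbook class names, purely for illustration): a mutator that is perfectly valid for ellipses silently destroys the subclass's invariant.

```python
# If Circle subclasses Ellipse, any method legal on ellipses must also be
# legal on circles -- and stretch() breaks the circle invariant.

class Ellipse:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def stretch(self, factor):
        self.width *= factor  # perfectly legal for any ellipse

class Circle(Ellipse):
    def __init__(self, radius):
        super().__init__(radius, radius)

c = Circle(2)
c.stretch(3)                # still "a Circle" as far as the type system knows
print(c.width == c.height)  # False: the circle invariant is gone
```

Flipping the hierarchy (Ellipse subclassing Circle) fails just as badly, which is the point: the relationship is not a hierarchy at all.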

I really just feel it's because most people who use OOP end up increasing the complexity of their code and have no idea how to use it to make things simpler, safer and more reliable.

It was like lightning had struck when I realized that if you did OOP in a certain way you could make guarantees that some mistakes would be impossible to make, and that whatever I had been doing for the past several years was not even remotely OOP, even though everything was in a class.
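One common reading of "OOP done a certain way" is using encapsulation so that an invalid state cannot be constructed in the first place, rather than checked at every use site. A hypothetical Python value object as a sketch:

```python
# A made-up value object: the invariant (non-empty name) is enforced at
# construction, so every NonEmptyName that exists is guaranteed valid.

class NonEmptyName:
    def __init__(self, value: str):
        if not value.strip():
            raise ValueError("name must be non-empty")
        self._value = value.strip()

    @property
    def value(self) -> str:
        return self._value

print(NonEmptyName("  Grace  ").value)  # Grace
try:
    NonEmptyName("   ")
except ValueError as e:
    print(e)  # name must be non-empty
```

Code that receives a `NonEmptyName` no longer needs to re-validate it; the mistake of passing an empty name is impossible by construction.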

The reason I'm interested in learning functional programming is because it seems like it too holds the promise of eliminating errors entirely, if done well, perhaps not in a mutually exclusive way.

Over the years I've come to believe that OOP is primarily an elaborate mechanism for keeping the lunatic in the next cubicle from shooting himself in your foot. Assuming the overall architecture is relatively sane, the lunatic can safely do whatever weird nonsense he likes in the privacy of his own objects.

If all the developers on your team are sane, then OOP is just a bunch of unnecessarily verbose conceptual complexity.

(The unfortunate corollary, of course, is that the code from the lunatic in the next cubicle is just as likely to be your own code from six months ago)

Like all other wildly successful software paradigms, OOP is now out of favor.

There have been successful languages (from a popularity standpoint) that are regarded as OOP, but the success of OOP as a paradigm is highly debatable.

Huh? Imperative programming is still the dominant programming paradigm, as it's been since the first programmable computers.

This is the video that really made me cut down on the use of OO in my own code.


It's about Python but applicable to most languages.

It tends to be excessively verbose and often requires detailed naming of temporary objects when solving stupid simple problems like pipelines and data aggregation. Inheritance is often not the best way to compose properties. People who learn only OOP tend to not see the obvious value and convenience of first-class functions.
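The verbosity complaint, sketched in Python (everything here is illustrative): a trivial aggregation that first-class functions express directly, versus the same logic forced through a named temporary object.

```python
from functools import reduce

data = [1, 2, 3, 4]

# Functional: the pipeline is just expressions -- filter, map, fold.
total = reduce(lambda acc, x: acc + x,
               (x * x for x in data if x % 2 == 0), 0)

# OO-style: the same logic requires inventing and naming an object for it.
class EvenSquareSummer:
    def __init__(self, items):
        self.items = items

    def run(self):
        result = 0
        for x in self.items:
            if x % 2 == 0:
                result += x * x
        return result

print(total, EvenSquareSummer(data).run())  # 20 20
```

Both compute the same thing; the class version adds a name, a constructor, and held state just to run a three-step pipeline once.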

OOP is about large scale, and FP is about small scale. They best work together.

> OOP is about large scale, and FP is about small scale

Sorry, but this is an extremely audacious claim to make without backing it up.

I also think "FP is about small scale" is quite wrong, looking at empirical evidence.

If you are interested in this topic, I recommend reading Martin Odersky's papers on Scala design.

Maybe you can just let it go and think of it as an "opinion" or based on years of experience? Doubt you will find a journal article laying the foundations you desire.

Modularity is about the large scale. OOP should not hijack the exclusive rights to tools introduced long before it became popular.

I'd always prefer the SML module system to anything OOP for a project of any scale.

> I'd always prefer the SML module system to anything OOP for a project of any scale.

There's a problem. I don't know of any project implemented in SML at scale (I mean at least several MLOC).

OK, choose Ada's modules instead. And Ada has been used for many MLOC-scale projects.

I don't have experience with Ada, but I suspect all its cool modular features can be expressed in Java (and certainly in Scala).

Could you give examples of features that you would like to have in modern OO languages which are available in Ada?

I am not talking about any "cool features" in particular. My point is that for large systems you need a decent module system and namespaces, not OOP.

The fact that most of the statically typed OO languages also provide these features does not mean that OO is suitable for scalability - this is totally orthogonal to OOP.

This is a fallacy similar to attributing pattern matching and ADTs to functional programming, while these features are not inherently functional in any way.

Try to build a large system in a dynamically typed OO language to see why OO per se is of no use for a large scale architecture at all.

I think you are making a mistake here. You shouldn't treat objects as objects, but as instances of modules. If you see it this way, everything looks much better in OO languages.

Why do I need OO in the first place, if the only value it can offer is a poor man's substitute for modules? I'd rather use proper modules instead, with generic (i.e., inherently anti-OO) features.
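The "objects as a poor man's modules" point, sketched in Python (both spellings are illustrative): a class used purely as a namespace for stateless functions adds nothing over a plain module-level function.

```python
# OO spelling: a class that exists only to group functions -- no state,
# no instances, just a namespace.
class MathUtils:
    @staticmethod
    def double(x):
        return 2 * x

# Module spelling: just a function at module level (or in its own module).
def double(x):
    return 2 * x

print(MathUtils.double(21) == double(21))  # True
```

When the class holds no state and is never instantiated, the object machinery is pure ceremony; a real module system gives the same namespacing directly.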

