Does OO really match the way we think (1997) [pdf] (leshatton.org)
104 points by tjalfi 104 days ago | 239 comments



I think the big problem with OOP is that it's designed to simulate the real world but has the real world backwards.

Real world objects are a composition of parts. The taxonomy of those objects, and their constituent parts, is an artificial construct independent of how those objects are composed. A frog is a frog because it fits some definition that experts agree on. The taxonomy comes afterwards - the composition just is.

What we have is a model where the taxonomy comes first. We say a frog inherits from an amphibian. What makes an amphibian an amphibian doesn't matter in OOP. Further, the language rarely provides the same means for composition as it does for inheritance and taxonomy. That's a problem.

In fact, the only reason why a frog is an amphibian is because it has parts all amphibians share. The label comes afterwards, after looking at the constituent parts.

In this way, OOP would benefit from composition-first, taxonomy after. An object is a thing because of what it has (and this includes behaviour too), not because of what it is or descended from.

It should be entirely possible for me to construct an object that fits the definition of a frog without specifying I'm creating a frog.


I think you'd benefit from taking a look at Entity Component Systems. Generally used for games, this programming pattern assigns behaviour to entities based on the components that they have. Moreover, neither entities nor components on their own actually own any behaviour. Behaviour is provided by systems that operate on entities with the associated component "signature".

For example, an entity with a position and a velocity component (position could hold x,y,z values and velocity vx,vy,vz values) could be operated on by a movementSystem (which adds behaviour to any entities that have both a position and velocity component, and updates position values based on velocity values.)

The nice thing about such a system is that the taxonomy, as you call it, gets defined when you instantiate the systems of your program. If you have entities with position and velocity components but don't want a movementSystem to operate on them, you simply don't instantiate the movementSystem. Pretty flexible, but not used extensively outside of video games.
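A minimal sketch of the idea in TypeScript (all the names here, including movementSystem, are made up for illustration, not taken from any particular ECS library):

    // Components are plain data; an entity is just a bag of components.
    type Position = { x: number; y: number; z: number };
    type Velocity = { vx: number; vy: number; vz: number };

    type Entity = { id: number; position?: Position; velocity?: Velocity };

    // All behaviour lives in systems. The movement system only touches
    // entities whose components include both position and velocity.
    function movementSystem(entities: Entity[], dt: number): void {
        for (const e of entities) {
            if (e.position && e.velocity) {
                e.position.x += e.velocity.vx * dt;
                e.position.y += e.velocity.vy * dt;
                e.position.z += e.velocity.vz * dt;
            }
        }
    }

    const world: Entity[] = [
        { id: 1, position: { x: 0, y: 0, z: 0 }, velocity: { vx: 1, vy: 0, vz: 0 } },
        { id: 2, position: { x: 5, y: 5, z: 5 } }, // no velocity: ignored by movementSystem
    ];

    // Don't want movement? Just don't instantiate (call) the movement system.
    movementSystem(world, 1 / 60);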


The problem with this approach is that systems will need info from multiple components, so at some point you are forced to introduce the coupling that the ECS was brought in to avoid. For example, the rendering system needs to know entity position. So to which system does the "position component" belong? Not all entities that have a position need to be rendered, nor do all renderable entities require movement, and this same issue permeates all aspects of the architecture.


I suggest you read more into component signatures and how systems decide which components to operate on.

Specifically, systems do not own components. Nor do they operate on full entities. Nor is there any coupling.

If you have entities that need to be rendered, you would give them a render component. A render system would render all render components that exist (thus 'rendering the entities' to which those render components belong).

If you have entities with positions but not render components, the render system would not operate on those entities.

Similarly if an entity has a render component but no, say, velocity component, it would not be registered to a movementSystem.

Systems that need to propagate data between components would have component signatures that include the components whose data they need. So a positionRender system would have an aspect of [PositionComp, RenderComp] and would be the only system to copy data from one to the other.

There will always have to be a place that decides which object has which components. This should be as close as possible to the 'aggregate root' (a term from domain-driven design), and in ECS that is generally the point of instantiation of the entity. I believe this question is prompted by a fundamental misunderstanding of ECS.


Components do not belong to a system. Any system can access any component. Think of RDBMS stored procedures that operate on all rows matching certain criteria; don't think OO style encapsulated "methods".

The system only operates on entities that have all the components they need. So there won't be one rendering system but several: one for entities that require movement, one for those which do not, etc.


Almost right. You wouldn't necessarily have a separate rendering system for entities that require movement and those that do not.

Instead you would likely have an additional system that operates on entities that have both position and render components and propagates updates from position to render data. The render system for all entities with just a render component would then remain the same.

This can be imagined as systems that work on signatures (a component set or aspect that specifies whether an entity's components match the system's criteria).

A render system would have an aspect of, say, [RenderComp] and a positionRender system would have an aspect of, say, [RenderComp, PositionComp]. Note that the positionRender system would not actually render anything, only serving as a propagator of data.
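In TypeScript-ish pseudocode, the matching could look roughly like this (the component names and the matches helper are made up for illustration):

    // A system declares the component set (aspect) it needs; only entities
    // whose components are a superset of that aspect are handed to it.
    type ComponentName = 'PositionComp' | 'RenderComp' | 'VelocityComp';

    function matches(entityComponents: Set<ComponentName>, aspect: ComponentName[]): boolean {
        return aspect.every((c) => entityComponents.has(c));
    }

    // renderSystem:         aspect ['RenderComp']                 -- draws render components
    // positionRenderSystem: aspect ['RenderComp', 'PositionComp'] -- only copies position
    //                       data into render data, draws nothing itself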


Why does a component need to belong to a system? Why can't the component belong to itself and just be used by multiple systems?


Right, this is the general approach of ECS.


ECS as you describe it sounds like a good idea, but notice how it is a step away from OO, which was supposed to keep code and data in a single encapsulated thing called an "object".

Now our components are nearly just plain-old-data and entities are slightly more sophisticated data structures for tying them together. All the code is now kept in "systems", which are starting to look very much like plain old modules.

No doubt the OO language will give you some useful tidbits, so that you can have slightly non-trivial getters and setters and whatnot. But the design ends up fundamentally procedural -- which I say is a good thing.


Yes, ECS is very much not OOP.


Thank you - I think this was probably the inspiration or seed for what I was thinking :)

I was amazed when I saw the ECS architecture when I dipped my toe into game development.

I remember wanting to apply it to other areas outside of game development.

It's interesting, though, that in game development, which is all about simulation, a paradigm intended to help simulate the real world has been eschewed in favour of something like ECS for the sake of speed and maintainability.


Yeah it's an interesting pattern and even when implemented in an OOP language ignores most OOP ideas in favour of what is really more akin to data-driven design.

There's some great insight into how the pattern evolves in non-game projects here:

https://softwareengineering.stackexchange.com/a/306983/27143...


That's an awesome article! Thanks for sharing.


> I think the big problem with OOP is that it's designed to simulate the real world

I actually think that it's more designed to reflect (one particular idea of) the way humans construct mental models of the real world.

> Real world objects are a composition of parts

At the lowest level, sure, but if we aren't modelling quantum physics, the “parts” are arbitrary artificial divisions (or aggregations), not part of the fundamental nature of the real world, and they are no more fundamental than the taxonomy (or identity) of the objects they compose.

> In fact, the only reason why a frog is an amphibian is because it has parts all amphibians share.

In a typical taxonomy, that's true, but it's not in a phylogenetic one, which is based on ancestry, not parts.

> It should be entirely possible for me to construct an object that fits the definition of a frog without specifying I'm creating a frog.

OOP allows this (and it's common in dynamic OOP languages, back to Smalltalk), though static class-based OOP (which treats inheritance as a type hierarchy) relies on declaration to limit what an object can be used as.


Food for thought. Your comment shows a wide range of knowledge and covers things I haven't thought about.

I was reading about the history of Simula and how the majority of OOP features originated there. It was built as a simulation language, hence the name. I took from that it was designed to simulate the real world - I guess based on the below that's akin to simulation of mental models.

I did briefly think about the quantum mechanics rabbit hole. You make a good point. I'll think about this more. People can tell what a radio is because they have a general idea of what a radio does - it has a speaker, volume, an aerial. This is what led to my thinking.

I'll think more about this.


>It should be entirely possible for me to construct an object that fits the definition of a frog without specifying I'm creating a frog.

This is known as structural typing and TypeScript does exactly this. Two objects with the same properties are considered the same type.

This usually works well, but there are a few issues with it. Number one is related to optimization. The type system that results from using structural typing is unsound. This doesn't matter for TypeScript, since it compiles down to JavaScript, which has no types anyway, but for a typical VM-executed language there are many optimizations the compiler can't do.

The second issue with structural typing is what I like to call the "mixin problem". If you get too flexible with composition it becomes close to having no type system at all.
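To make the structural-compatibility part concrete, here's a tiny TypeScript sketch (the Frog type and the object are hypothetical):

    interface Frog { legs: number; croak(): void; }

    // Never declares "implements Frog", but has the right shape.
    const somethingGreen = {
        legs: 4,
        croak() { console.log('ribbit'); },
        colour: 'green', // extra members are fine when assigning from a variable
    };

    const frog: Frog = somethingGreen; // accepted purely on structure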


Is it true that all structural type systems are unsound? I was under the impression this is a typescript specific problem.


I tried to find the answer and it seems the problem is much more complex than I thought. Rather than bullshit about it I'll just say I have no idea :)


Is structure defined just by type?

Like, are all things which only have two integer properties considered the same?

{ Int age; Int weight; }

Would be equal to:

{ Int x; Int y; }

?


Close, any objects with the same properties and types are compatible. The names of the properties and their types need to match.

And this isn't always a good thing. If you have a function dealing with the property 'length' or 'value', for instance, you might as well have no type system, because TypeScript will allow you to pass in any object with those properties, which is almost anything.

While the code will still compile and run like this, the type system has failed in its purpose of preventing developers from using the wrong types.
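For example (a deliberately silly sketch):

    // A function typed only in terms of a 'length' property accepts almost anything.
    function describeLength(x: { length: number }): string {
        return `length is ${x.length}`;
    }

    describeLength('hello');          // strings have .length
    describeLength([1, 2, 3]);        // so do arrays
    describeLength({ length: -42 });  // compiles, even if it makes no sense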


Given an object with a length property with the type a method expects, is there ever a scenario where the method is incorrect?

I'm trying to wrap my head around it: if a method needs only the length property, which is an unsigned int, is there any good reason why it shouldn't accept any object that has that property?

It appears at first that this is a good quality: the code is general whilst obviously avoiding any type errors, including null/undefined errors.


The type system is still working properly in that it will produce code that compiles and runs. For the developer, such behaviour can be misleading. This is compounded by IDEs with IntelliSense, which will suggest passing random objects with the correct properties into the function.


I see, sounds interesting. Are the names namespaced? And how would a method know it's compatible? Do you type the properties a method needs instead of the object name?


You pretty much act as you do with regular type systems, rarely creating types that are compatible by accident. There is one place however where you use structural typing constantly.

TypeScript is a superset of JavaScript, and using "options" objects as method parameters is a common pattern. These options objects are analogous to anonymous classes and don't have a name, only a shape. Structural typing is one of the strategies TypeScript uses to maintain compatibility with patterns in JavaScript that use these nameless objects.

There are no namespaces. Methods are compatible if they take the same or a subset of the parameters. Objects need to have matching property names and values.
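A sketch of that options-object pattern (connect and its options are made-up names):

    // The parameter type is just a shape; no named class is declared anywhere.
    function connect(options: { host: string; port?: number; secure?: boolean }): void {
        const port = options.port ?? (options.secure ? 443 : 80);
        console.log(`connecting to ${options.host}:${port}`);
    }

    connect({ host: 'example.com' });
    connect({ host: 'example.com', port: 8080, secure: true });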


Hum, thanks for the info. I'm curious to try it out. I think names should be namespaced though; that could solve the issue of having a property named length or value. The namespace could distinguish between two values that aren't compatible, for example. Though I guess if the types are precise enough, they could embed some of that.


"In this way, OOP would benefit from composition-first, taxonomy after. An object is a thing because of what it has (and this includes behaviour too), not because of what it is or descended from."

A lot of working programmers have figured this out already. Most experienced programmers I know don't use much inheritance.


Unfortunately, OOP languages typically lack sum types, the mathematical dual of product types (objects with just fields). This means subtyping and inheritance are often needed to encode sum types.


In functional languages it's easy to run into difficulties with extending a sum type without changing the defining module.


Exactly, and the languages don't express that practice. We get design patterns / DI and maxims like composition over inheritance. Despite all that, our languages still make it easier to inherit than to compose.

Plus, my wider point was that in the real world, which is the inspiration for OOP, the taxonomy is artificial and placed onto an object. The taxonomy isn't an inherent part of the object (i.e. Dog extends Canidae). The taxonomy is given to an object based on its composition, separately: a dog is a dog because of some properties x, y, and z, and it just so happens that x and y are what make a canid a canid and x is what makes a carnivore a carnivore.


> It should be entirely possible for me to construct an object that fits the definition of a frog without specifying I'm creating a frog.

The problem with this approach is that different domain experts will have different definitions of what "is" a frog. If you ask a theologian, a molecular biologist, and a zoologist to define a frog, you'll certainly get three different but valid answers.


That's just a problem of context. With the taxonomy separate, the objects can be different things in different contexts. With de facto OOP, you have no hope in hell, because the taxonomy is hard-coded into the structure of the object itself.

My point is supported by what you say, in fairness. A taxonomy is artificial and separate from the actual object itself. It can be, and is, different depending on who defines it. My argument is that this is why OOP suffers: it makes the taxonomy inherent to the object, and all best OO practice tries to work around that. "Composition over inheritance", anybody?


> That's just a problem of context. With the taxonomy separate, the objects can be different things in different contexts.

Except that you want to share properties between those different contexts, and this is where hell begins. For instance (for a more programming-related problem), say that you have a GUI application with graphical objects that map to your frogs, say a frog pond simulation. Now months pass and the boss says "okay, it seems that our software interests molecular biologists too, but they need another 'molecular frog view' with properties Foobigate and Brozigate to study them properly at the molecular level. Of course the 'molecular frog view' has to be in sync with the 'pond frog view', and they also want a simple readable save format." Now, how do you set up your code so that Joe programmer, who comes next week, can be productive on the code in three days?


I love how much this comment makes me think. Thank you!

To be honest, this sounds like a hell hole no matter what you did. It's a complex program that services two domains.

I would say that in the real world, a frog studied in a pond and a frog studied at a molecular level share the same properties. A frog is a complex object that has many parts. Naturally, for a pond view, you provide the properties needed for that, through composition of smaller objects. If you want to study it at a molecular level, you are really studying those properties, the constituent objects, at a much more granular level (and perhaps parts you didn't need in the simple pond view). It's all still one object.

In fairness, this would be much worse to model in bog-standard OOP. Could Joe programmer fare any better there? Would Joe programmer be working on a macro/molecular simulation in the first place?


As I mentioned above, an Entity Component System would have no issue solving this. You'd add the molecular frog components (foobigate and brozigate) wherever you create frog entities, and then you'd also create and add a system that synchronizes between the entities that have pond frog components and these new components. None of the original code would change apart from the instantiation function for a frog entity. A similarly simple change handles the save format.


I guess what I'm calling for is a language that directly supports ECS :) It's currently a pattern that works well, and any good pattern should become a language feature.


That's not a "problem" - that just means you have different frogs, each with well-defined criteria.


One of the main points of design patterns is composition over inheritance. Composition is also a major part of functional programming.


Yep, exactly! We now have a bunch of OOP languages that provide language features for inheritance, making it easy, but composition is an emergent property entirely based on existing language features - it's not a language feature itself.

What I'm saying is: flip it on its head. Build an OOP language where you have features that make composition easy, and treat the taxonomy as a separate thing that isn't tied directly to an object.


Would you say that Scala is close to hitting that target? I've only taken a cursory look at it, but its traits appear to satisfy composition and its type system looks pretty close to being a decoupled taxonomy.

I'm not sure what a full implementation of a separate taxonomy would look like. Duck typing with an algebraic type system that is only applied when you ask for it?


I haven't looked at Scala closely. I'll take a look - thanks for the tip :)

This is something I'm thinking about a lot. I've been wanting to sit down and sketch out what it would look like in terms of syntax / structure. Difficult with 3 kids, wife and a full time job.

I think behaviours should be able to specify taxonomy, which in effect is a name for saying "I only accept objects that do this and have that".

I envisioned a system where it's perfectly possible to work with purely anonymous taxonomies, i.e. your behaviours specify explicitly what their parameters must have in terms of behaviour. E.g. if an object parsed a string, we could expect that the parameter has just the behaviours needed to do the work, e.g. length, substring, equality. Or we could just say it wants a string (which is just a separate alias that says an object has those things - like an interface, except you don't have to declare you're implementing it).

I think there is value in being able to see that an object you want to be part of a taxonomy fits the bill - this would be more like interfaces. Maybe it's better left to static analysis (stop! you're trying to pass an object that doesn't have behaviour X to behaviour Y, which expects it).
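TypeScript's structural typing gets fairly close to the anonymous version of this; a rough sketch (shout and Shoutable are made-up names):

    // The parameter demands just the behaviours it needs...
    function shout(s: { length: number; toUpperCase(): string }): string {
        return s.length > 0 ? s.toUpperCase() + '!' : '';
    }

    // ...or we can give that shape a name, like an interface that nothing
    // ever has to declare it implements.
    type Shoutable = { length: number; toUpperCase(): string };

    shout('hello'); // a real string fits the anonymous shape: "HELLO!"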


I don't think what you're saying is really much at odds with many OOP languages. Unless I'm missing something, Ruby's mixins and duck-typing (caring about what messages an object responds to rather than its type) come awfully close to what you're looking for.

Granted, something like duck-typing is more of a practice or pattern than something that's terribly well supported by the language (it's up to you to put safeguards that the object being passed to you does what you want it to do), but it's certainly possible and describes most well-written Ruby code.


Ruby influenced my opinion.

What I described above wasn't duck typing plus responds_to - though maybe it sounds like a whole lot of sugar on top of that. It was keeping the taxonomy separate, with behavioural sharing through composition only. I was elaborating on how parameters would work - do you allow taxonomies there, or keep it separate still?

It's at odds with Ruby because it's still easier to inherit than it is to compose an object. As much as I love the language for its flexibility, it is still based on the idea of coupling the taxonomy with the object itself.

I'm sure an emulation of what I describe is possible to a degree in Ruby.


>One of the main points of design patterns is composition over inheritance.

One of the opening pages of the Design Patterns book (the GoF book) has "Prefer composition over inheritance" (or words to that effect) as the only sentence on that page, right in the middle of it. My guess is that the GoF authors did that to emphasize the importance of that advice, maybe due to having seen too many brittle inheritance-laden hierarchies.

I mentioned this to a client I was consulting for, a couple of years ago, after I saw that he seemed to be using inheritance too liberally, maybe without thinking about whether it was needed and the right thing to use for his specific needs. He did get the point, and changed his code from then on.


Different languages encourage different OOP styles e.g., Go language encourages composition.

Though people may argue about the meaning of OOP https://groups.google.com/forum/#!msg/golang-nuts/bSXry29pNo...

Direct link: http://www.purl.org/stefan_ram/pub/doc_kay_oop_en


OO solves some problems and creates others.

One must remember that the competing paradigm isn't functional programming but imperative procedural programming. Very large code bases in C, Fortran, Cobol aren't all that nice either.

It's unfortunate that ML and Lisp didn't gain greater traction but that's likely due to Unix.

Many of OO's flaws are being addressed and are even absent from newer languages. Rust "feels" OO but isn't. If you write C#7 or Swift and avoid inheritance and prefer immutable types as long as possible - how OO is your code then?

The problem that OO solved is that it allows mediocre coders to iterate on large scale code bases until a problem is solved. This is no small benefit in industry. The drawback that the code is hard to maintain and prone to bugs is an acceptable one.

Modern multi paradigm languages such as F# or Rust are really the way forward I think. The Java/C++ way of OO was a good but expensive lesson.


> Rust "feels" OO but isn't. If you write C#7 or Swift and avoid inheritance and prefer immutable types as long as possible - how OO is your code then?

Both are 100% object-oriented (and in the case of Rust, I'd say 100% FP is also reachable; I'm not knowledgeable enough in C# to say). Do they have in-memory data structures with attached functions that do useful work with this object's state (not necessarily mutating it)? Yes. Then OO.


Rust is neither OO nor FP, it's procedural (like C and Fortran) but borrows concepts from OO and FP. OO requires data structures to support dynamic dispatch, and although you can have dynamic dispatch in Rust via trait objects your code is heavily gimped - no generics, you can only pass by reference, lifetimes have to be explicitly annotated, etc. It feels like OOP-emulating C code from back in the day. Likewise, if you try to do pure functional programming in Rust you're going to have a bad time. Closures don't share types, and doing dynamic dispatch on them creates the same problems as trying to emulate OOP. In reality it's its own beast, much closer to C with templates than a crossbreed of C++ and Haskell.


I'm not sure where this meme on HN that Rust isn't OO comes from, but it's based on a definition of OO that is almost never applied by day-to-day programmers and is disagreed with by a fair number of language theory people.

Rust has encapsulation, message passing in the smalltalk/OO sense, and polymorphism. It does this through objects with attached methods, and traits. In every way that actually matters to a working programmer Rust is OO.


Contrary to mainstream belief, there isn't one single truth about what OOP or FP actually are, but rather multiple approaches to applying a set of abstract concepts.


This, a million times. Any (meta) discussion on OOP without giving a precise definition of it is futile, for the devil lies precisely in those details that are omitted from the discussion.


> it's procedural (like C and Fortran)

Wait... do you actually believe that? Tons of C software is OO through and through, has been since C existed.


There's more to the everyday definition of object-oriented than having functions operating on in-memory data structures. Two other features that are commonly included are subtyping through inheritance and encapsulation.

If we stick to your definition C functions operating on structs qualify as OO, and I don't think many would agree with that.


My first exposure to OOP was in C, using structures holding function pointers as a pattern. You can do OO in C without C being OO. There is a huge semantic difference between doing OO and a language that supports OO; the article is referring to the former rather than the latter.


> Do they have in-memory data structures with attached functions

What do you mean by "attached function"?


I meant that a semantic value in the code (a variable for instance) "carries" functions with it, as methods.

e.g. in a C API:

    struct foo {
      // dispatch
      void (*do_stuff)(int);
      int (*get_stuff)(const char*, void* context);
    };

    // attached functions
    void foo_do_operation1(struct foo*);
    int foo_do_operation2(struct foo*, int);

or C++, which allows you to conflate both forms but restricts the first to inheritance

    struct foo {
        virtual void do_stuff() = 0;
        virtual int get_stuff(std::string, a_better_type_than_void_ptr&) = 0;
        void do_operation1();
        int do_operation2(int);
    };
      
or JS where everything can change whether you want it or not

    var v = new Object;
    v.do_stuff = function() { ... }
    v.get_stuff = function(str, something) { ... }
    v.do_operation1 = function() { /* v is available in the closure */ } 
    etc...
or OCaml modules:

    module Foo = struct
      let do_operation1 = ...
      let do_operation2 x = ...
    end

in every case, some state is mentioned first, and then some code is bound to this state; then, the language (except in C where it is only a convention... even though some people can get fairly "creative" with macros) provides a way to call the code with a given variable of type `foo`.


The article is about how C & Fortran ("traditional languages" by 1997 standards) compare to C++. This is not about functional- but procedural languages vs. OO. (!)


> It's unfortunate that ML and Lisp didn't gain greater traction but that's likely due to Unix.

Lisp and ML are competing by nature (MLs are famous for their type systems), and their "failure" is not because of Unix, but because of their inability to deliver 'fast' & readable apps. Also, the learning curve...


The performance claim doesn't really matter in a world where we use languages like Python, Ruby, PHP and Javascript for major applications. Also, compared to these languages, Common Lisp seems to generally be faster (and, on most implementation/architecture combinations, supports real parallelism).

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan...


> The performance claim doesn't really matter in a world where we use languages like Python, Ruby, PHP and Javascript for major applications.

Yes, it does. And those languages are all providing negative user/developer experience.


>Lisp and ML are competing by nature (MLs are famous for their type systems), and their "failure" is not because of Unix, but because of their inability to deliver 'fast' & readable apps.

?!

Lisp under SBCL is as fast as Java under the latest Oracle JVM and for numeric computation is sometimes as fast as Fortran.

This means at least 10-20x faster than CPython, Ruby's MRI and others.

I routinely read third-party Lisp libraries and they are very readable, no problems with that.


> Lisp under SBCL is as fast as Java under the latest Oracle JVM and for numeric computation is sometimes as fast as Fortran.

Back it up.

> I routinely read third-party Lisp libraries and they are very readable, no problems with that.

Me too and they're all junk.


Wait, learning curve? Relative to C++ or Java?


You can write something that passes acceptance tests in C++ without a lot of effort. It's things like removing all the bugs, memory leaks, and undefined behaviors that requires enormous expertise. There is certainly a lot of ground to cover before C++ mastery can be had, but simple OOP and procedural programs aren't that bad.


> You can write something that passes acceptance tests in C++ without a lot of effort.

True for freshman-year

    std::cout << "Hello, world!" << std::endl;
programs maybe. People's heads asplode when they get into C++ with any sophistication because they need to grasp several things at once: classes, inheritance and polymorphism, pointers (smart or not), value vs. reference types, etc. Java, JavaScript, and even Lisp go a long way toward hiding those details from the programmer.


Java is perhaps cumbersome to use at times (and especially in older versions), but it's not a complicated language, in fact it is a lot simpler than most.


It's not, however, simpler than ML or Lisp. Just more familiar.


In reaction to some of the comments popping up re. OOP and functional programming: I've come across quite a bit anti-OO sentiment in my career now, and not nearly as much anti-functional sentiment. I suspect that the prevalence of OOP has something to do with it - you're more likely to have been exposed to bad OO code than any other type simply because there's more of it. People often seem to draw an artificial distinction between OO and functional styles - but they're both just techniques that can be applied to solving problems and work well in combination. E.g. purely functional immutable objects seem to yield very readable and maintainable code. You can write bad code in any paradigm; but the more open you are to combining a variety of techniques, the more options you have for creating a good solution to the problem at hand.


> you're more likely to have been exposed to bad OO code than any other type simply because there's more of it

That's certainly true, but as someone who routinely has to go in and bugfix/maintain classes containing thousands of lines of code (sometimes in themselves, more often collectively in their inheritance hierarchies), the simple presence of extensively used member variables often is a massive cognitive load. I'm not claiming that I'm some rockstar programmer who knows better, but from my experience working with other programmers, member variables might as well be called global variables.

tl;dr: Mutation, mutation everywhere.


A few years ago, most programming forums I visited considered OO the normal good way, and FP arcane, slow, and basically useless. This opinion seems to have turned. Basically Haskell was too cool to ignore so everyone played with it on the weekends, while Clojure, React, and Erlang promoted functional patterns for pragmatic system development...


Commenting more on this thread than the article: in my opinion, the idea that OO and functional are somehow at odds with each other is misguided. There is little about OO that doesn't mesh well with functional ideas, and vice versa.

The core concept of most OO designs is that data is coupled with the methods that operate on that data. There is absolutely no reason why that can't work with immutable data and pure functions. In fact, the String class in eg Java and C# works just like that: each method returns a new instance.

Scala, for all its warts and complexities, has a fantastic feature called case classes, which encourages combining the best of OO with the best of functional. A case class is both a class in the classical OO sense and an ADT. Most case classes are immutable.

This is easy enough to simulate in many other languages, even if they don't have case classes. For example, did you know Immutable.JS allows you to add methods to Record prototypes?
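As a toy illustration of the general idea in plain TypeScript (no library involved; Money is a made-up example):

    // An immutable class in the OO sense whose methods behave like pure
    // functions: they return new instances instead of mutating, much like
    // String does in Java/C#.
    class Money {
        constructor(readonly amount: number, readonly currency: string) {}

        add(other: Money): Money {
            if (other.currency !== this.currency) throw new Error('currency mismatch');
            return new Money(this.amount + other.amount, this.currency);
        }

        negate(): Money {
            return new Money(-this.amount, this.currency);
        }
    }

    const a = new Money(10, 'EUR');
    const b = a.add(new Money(5, 'EUR')); // a is untouched; b is a new value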


> The core concept of most OO designs is that data is coupled with the methods that operate on that data.

I've read this a lot, in this discussion and elsewhere, for years. But what does it mean? Does it mean the data is syntactically coupled with the methods? Otherwise, pretty much any C program is an OO design by this definition.


Many C programs are OO by this particular definition :) I take the least ambitious definition of OO on purpose, because that definition is often the one that matters the most. I mean message passing yada yada, if you look at most OO software out in the wild, the key feature is the syntactical coupling between nouns and the verbs that operate on those nouns.

An extra bonus, though, compared to e.g. C, is that in OO languages you don't need to spell out the type (plus polymorphism, of course). So String_trim(&myString) becomes myString->trim(), and that's shorter and, most of the time, equally clear and less noisy.

I miss this most when coding Elixir.


Yes, exactly. OO is about encapsulation and associating behavior with state. FP is about working with immutable state. I have never understood why they are viewed as opposing. This discussion I participated in goes into that a little:

See my (geophile's) comment here: https://news.ycombinator.com/item?id=13850210


indeed, there has been a small essay on this: http://www.lispcast.com/object-oriented-vs-functional-duals


Cool essay, but it is precisely not what I mean. What I'm talking about is that despite the duality between OOP and FP, you can combine the two effectively.


No comment about OO in general, but I think Java/C# is a damn fine language design, avoiding many problems of newer paradigms. Here's an example:

1) Java: starts out with one opaque string type, because that's what OO methodology says. Changes the internals when needed. All clients continue to work forever.

2) Haskell: starts out with strings as exposed linked lists of characters, because functional methodology says exposed algebraic types are OK. Sticks with this mistake for many years, then recognizes that opaque would be better and introduces a new type Text. Can't change the old one or make them compatible. Moreover the new type exists in strict and lazy variants, because Haskell methodology says people want that.

3) Rust: starts out with two string types, because the language design suggests both should exist (String and &str).

It's not just about strings. The same happens with compilation speed, binary compatibility, etc. "Modern" languages get it wrong and can't fix it, Java 1.0 gets it right and it stays right.


The String and &str thing is a common criticism of Rust, but I feel it's a bit shallow when said without context. Maybe it's just me, but it makes sense when you consider Rust's ownership semantics (which are a new idea in language design, so you can't really look at how other languages solved issues arising from them).

Basically, when you have the notion of ownership/lifetime in a language, you have to make a distinction between an owned and a borrowed thing. When you pass an owned thing to a function, you give up your ownership of it, and can't use it anymore. When you pass a borrowed thing, you keep the ownership, and the compiler tries to prove that, based on ownership info it has, it lives long enough for the operation to be safe. Just as String and &str are owned-borrowed counterparts, you also have e.g. Vec<u8> and &[u8] (the former is an owned vector, the latter is a read-only, borrowed view into that vector) and many more examples of this dichotomy.

I don't see how this could have been solved in Rust without having those two types.


Those are good points, but really not an argument against FP. String could have originally been an abstract data type in Haskell. There are of course disadvantages to the Java string approach, in that there is potentially a lot of code duplication inside the String class. There will also always be some situations that will require you to unpack the string into an array of chars, because a particular method does not exist.

There are plenty of things that Java got wrong. Threads and concurrent programming in Java is particularly difficult to get right without expert knowledge. Haskell makes concurrent programming much easier. I've worked with both professionally and feel that Haskell got more things right.


Rust doesn't have two string types. It has one string literal type (&str) and a poorly named string buffer (String).


Actually, Rust has four string types: str/String, CStr/CString, OsStr/OsString, and Path/PathBuf.


Only str is in the language. The others are standard library types. Libraries may introduce more types too.

We almost made str a library type but the downsides outweighed the upsides.


> functional methodology says exposed algebraic types are OK

No, it absolutely doesn't. Functional methodology is absolutely orthogonal to data abstraction.


In the early days, Haskell favored conceptual elegance heavily over practicality.

I'd argue that Java strings aren't perfect either, though. You can't seamlessly use the string interface for binary data or char arrays, for instance. Also, Java has various streaming implementations that all use completely different interfaces, so maybe lazy Java strings would have been helpful.


OOP did more to set back program design than anything else I can think of. It was incredibly wrong and has sent so many smart minds down a poor path. If, for example, functional had taken prominence in the 90s, we'd be in a much better place now.


I have the polar opposite view. I love OOP because it provides a way to elegantly solve so many problems.

To take a concrete example, consider the undo-redo mechanism in a text editor. It seems natural to create a list of objects, with each object representing an action that can be undone or redone, and a pointer to the most recent action object. Undoing executes the 'undo' method and moves the pointer to the previous action. Redoing moves the pointer to the next action and calls its 'redo' method.

You would have a 'delete text' type of action which knows how to remove a particular chunk of text and put it back. You would have a 'change text colour' action which knows how to change text to a particular colour and revert back. You might have an 'insert jpeg' action which knows how to add a picture and remove it from the document.

Crucially, each action object in the list should encapsulate its data and behaviour. It makes the undo-redo framework so much simpler if the framework knows nothing about each type of action, besides the fact that each action object has some kind of undo and redo behaviour.

I cannot see how you could get a simpler design without OO principles such as coupling the data and methods of each type of action, or without hiding the implementation details of each action from the undo-redo framework.
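A bare-bones TypeScript sketch of that design (the names are illustrative; a real editor would of course have richer actions):

    // The framework only knows the Action interface; each concrete action
    // encapsulates its own data and behaviour.
    interface Action {
        undo(): void;
        redo(): void;
    }

    class DeleteWord implements Action {
        private removed = '';
        constructor(private doc: string[], private index: number) {}
        redo(): void { this.removed = this.doc.splice(this.index, 1)[0]; }
        undo(): void { this.doc.splice(this.index, 0, this.removed); }
    }

    class History {
        private actions: Action[] = [];
        private cursor = 0; // just past the most recently applied action

        apply(a: Action): void {
            this.actions.length = this.cursor; // drop any redoable tail
            a.redo();
            this.actions.push(a);
            this.cursor++;
        }
        undo(): void { if (this.cursor > 0) this.actions[--this.cursor].undo(); }
        redo(): void { if (this.cursor < this.actions.length) this.actions[this.cursor++].redo(); }
    }

    const doc = ['The', 'quick', 'brown', 'fox'];
    const history = new History();
    history.apply(new DeleteWord(doc, 1)); // removes 'quick'
    history.undo();                        // puts it back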


Heh, check out time-travel debugging and be amazed at how simple a much more powerful “undo” can be made when purity is involved (in FP langs, e.g. Elm or Haskell).

Seriously though, design wise there was not much you said that was specific to OO. Take for example:

- all actions are a data type

- the history is a list of actions (in LIFO style for the purpose of simplicity here)

- undo pops the history list and puts the popped action in an undo list

- redo replays the undo list onto the history list

A bit more outlined, but nothing OO there, it was in fact entirely functional.


Already present in Smalltalk, Lisp and Mesa/Cedar development environments at Xerox PARC, no need for FP advocating for it.


I guess immutability is the actual thing that makes it easy, and arguably Lisp is also an FP lang. The main point, though, was the ease of doing it in a pure language - almost trivial.


This is a way simpler approach vs the OO way described above. The OO way requires undo/redo to precisely cancel each other, which is extremely difficult to get right.


Isn't the real difficulty of undo the implementation of the undo method for each action? The action list seems easy.


Store a list of actions, and represent your state as a sum of said actions. That way, no matter what the action is, "undoing" an action is literally just popping it off the list.

  actions = [append("The"), append(" quick"), append(" brown"), append(" fox")]
  state = reduce(actions, some_combiner) // This might give you a string "The quick brown fox"
If you pop the last element `append(" fox")` of the actions and rebuild the state, you can easily see that the new state "The quick brown" is simply the old state without the last action, i.e. an "undo".

If rebuilding the state is expensive, it is not hard to devise caches around state management under this model.


Makes sense. But I don't see how this is related to OO vs FP then. Putting all your actions into a list is easy. Replaying is easy. The main problem is caching the state, which can become tricky whether OO or FP.


You're right in that nothing about this is specific to OO or FP.

However, under this model, you really don't want your actions in the actions list to change after the fact. That means those actions in the list really shouldn't be modeled as "objects" with object-like mutable internal state and all. In fact, you'd probably need them to be as lightweight as possible, which means no sticking methods on them.

With those constraints in mind, OO-style code brings nothing to the table.


I think this unfortunately goes into the same direction a lot of political discussions tend to go. Two rigid philosophies and one has to win. In the real world either one can work if applied correctly. And both can learn from each other.

FP brings a lot of interesting concepts to the table that are useful. But I doubt that people who screw up OOP will screw up less with FP.

In the end competent and disciplined (in my view the number one attribute of a good dev) people will produce good software with any paradigm. Clueless people will cause problems no matter what the tool is.


My argument was actually just that it had nothing to do with OO at all :)

Caching becomes a lot easier in a pure functional programming language, though, and in a lazy one it happens automatically for a lot of things.


I pretty much used this exact same approach to implement undo/redo in a programmer's editor and found it worked well.

My code is pretty much exactly as you describe: an undo and a redo stack holding action objects, with the interfaces to the action objects being nothing more than 'undo()' and 'redo()' methods.


In functional programming you would store actions in lists, with funs/closures inside that data. For example, in Erlang a list of undo actions would look like this:

  Undo = [ { RemoveJpeg, JpegData} ]
where RemoveJpeg is a fun (like a closure) and JpegData is what would be given as an argument to this fun. The implementation of undo looks like this:

  undo_action(Document,[UndoHead|UndoTail]) ->
    {UndoFun,UndoData} = UndoHead,
    UndoFun(Document,UndoData).
Isn't it simple? RemoveJpeg just returns the modified document. undo_action knows only that every action should take two arguments and return a modified document.


In Smalltalk:

    RemoveJpeg := [:data | ...].
    JpegData := [:data | ...].
    Undo := { RemoveJpeg. JpegData }.
A possible implementation of undo could be

    undoAction: document with: undoList
        undoList do: [:aBlock |
            aBlock value: document]


The fundamental feature that makes such an example easy is first-class functions. You are simply using objects to encode this and not making any use of their identity and mutable state.


> You are simply using objects to encode this and not making any use of their identity and mutable state.

Most software with undo-redo will show you some metadata related to the action (for instance "Undo 'Drag stuff'", "Redo 'Set text in bold'"...) so you have to have a way to associate metadata with the "undo / redo" functions; the class is a good tool for this.

Also, a common optimization is to alter the head of the stack instead of deleting / creating new commands when, for instance, you are moving an object in the GUI. E.g. say you have a command that moves a box to a specific position. If command objects are immutable, every time the mouse moves, you have to pop the head of your stack and push a new command instead (which may be a waste of resources); instead you can just have an "update" method which changes the position stored inside the command, which is microscopic in comparison to a malloc / free.


You can do that optimization in FP languages if you have unique types, ie, types that can only have one reference to them. You use the list as if it's immutable, by creating a new list with the new command and the rest of the previous list. But since the compiler can prove that the old list is no longer referenced, it can reuse it and simply make the update in-place.


Could you show an actual example where commands are polymorphic types (e.g. they come from a separate DLL) and this optimization actually happens? (The actual "updating" of the command doesn't need to cross DLL boundaries; e.g. a DLL both provides a command, instantiates it, and updates it. The command queue is instantiated by another DLL, though.)


I don't know what icebraining is talking about, because AFAIK there aren't widely used languages with unique types, except perhaps Rust.


> If command objects are immutable, every time the mouse moves, you have to pop the head of your stack and push a new command instead (which may be a waste of resources)

Have you profiled that?! :)


Yes, actually. It used to eat 30% of the CPU on a piece of software I worked on, because of a malloc-fest when moving the mouse quickly across the screen and dragging multiple objects.


Apart from this being the prime example of the argument for immutable data (i.e. just keep the whole state of your editor at every undo point somewhere; restore previous states when undoing), the "OO way" doesn't even need OO features - see e.g. typeclasses or traits.


ClojureScript talk, 26:52 is the demo but the whole video is worth watching and key to understanding how it works: https://m.youtube.com/watch?v=-I5ldi2aJTI


OO was a huge improvement in the 90s. But as with a lot of paradigms it got elevated to a religion and way overused. I still don't understand why a lot of teaching emphasizes inheritance.

The same would have happened with FP. People would have messed it up too.


But without inheritance what actually is OOP? Haskell has typeclasses. Rust has traits. Is there anything they lack before one could call programming in Haskell or Rust OOP?


Fad X did more to set back program design than anything else I can think of. It was incredibly wrong and has sent so many smart minds down a poor path. If, for example, Fad Y had taken prominence in the 90s, we'd be in a much better place now.


Exactly!


> set back program design

No one stops you from writing programs in whatever style you want. If functional programming had that much of an edge over OO, it would have won the argument long ago.

I don't like how you assume other people don't know what is best for themselves and need some top-down expert to light the path. Your argument, if there is one, sounds really preachy and proves nothing other than that support for functional programming here on HN is more about identity politics than about actually making it useful to the crowd.


> I don't like how you assume other people don't know what is best for themselves and need some top-down expert to light the path.

I do love the juxtaposition of this line, with your following accusation of people on HN engaging in "identity politics".

In my view, the argument "X can't be better than Y, otherwise we'd all be using X" is wrong for the same reason that "K can't be a good idea for a company to start, otherwise someone would've started it already" is.

Some things are too early, some things are marketed poorly, sometimes it's luck, sometimes it's dedication in sticking with it. I strongly believe that markets aren't efficient and have lots of imbalances. I also strongly believe that life isn't efficient and has lots of imbalances.


You mean the functional path from Lisp with CLOS and FLAVORS?


Poe's law strikes again!


I love work like this - a real attempt to understand what was happening. Genuine scholarship.

But it illustrates the issues with this sort of work. This is really about C++ vs C/Pascal and not OO vs Proc. Also this is 1997 C++, which I earned a living from for a brief moment, and I can attest it was horrible and very different from 2017 C++.

From my perspective today: we need to separate parametric and polymorphic OO in our thinking, and the big problem is that programmers use these tools to the limit (creating objects with 3+ parameters or deep inheritance hierarchies) and weave silly complexity into the code base.

In a commercial setting, the biggest impact I have seen is a project that became impossible to staff, because as soon as developers saw the code base they started looking for other work. It was really good code, developed by two brilliant people initially and then three or four more smart ones as the project grew, but when the lead left it was just too complex - due to prolific use of generics and an obsession with ORM.


> This is really about C++ vs C/Pascal and not OO vs Proc. Also this is 1997 C++...

I can imagine the Smalltalk community rolling its collective eyes and saying "that's not OO..."

My issue with the primary study is that the two programs are not really comparable: I think a C++ parser was a harder problem than a C parser (even given that it is 1997's C++), and this is backed up in the study:

"The C++ parser by further contrast is a recursive descent parser owing to the intractability of parsing C++ using 1 token-lookahead grammars. In other words, they are very different products..."

In mitigation, it says that the C++ parser, as used in the comparison, did not parse the full language, thereby presumably reducing the difference in the complexity of the requirements, but I think the difference in architecture alone is enough to raise doubts over the significance of the results.


C++'s implementation of OO isn't that good to begin with, mainly due to compatibility concerns. Perhaps we should look for a better OO language for comparison, say Smalltalk.


Indeed! And not every language that claims to be OO actually follows the basic ideas [1,2]. So Smalltalk, e.g. as Squeak [3] or Pharo [4], is a good place to start looking and learning, but so are Erlang [5] and Clojure [6]. State is necessary, not "evil", but needs to be managed well. And most OO languages have, apart from the assignment operation, no abstractions for it.

[1] http://wiki.c2.com/?AlanKayOnMessaging
[2] http://worrydream.com/EarlyHistoryOfSmalltalk/
[3] http://squeak.org/
[4] https://pharo.org/
[5] http://www.infoq.com/interviews/johnson-armstrong-oop
[6] http://thinkrelevance.com/blog/2013/11/07/when-should-you-us...


OOP does not offer true state encapsulation, it just hides it. Objects are always potentially stateful when reasoned about externally. This means that all objects in your system could potentially contribute to the global state space, resulting in a combinatorial explosion of state. This makes programs difficult to reason about and is not a good general approach.

In contrast, functional programming makes state explicit. This makes it easier to manage and control state.


It's cute how people think that functional will fix everything.

In what way does functional make state explicit? If I look at a structure (if you have one in your language) and ask who changes this part of it across a large code base, it's much worse when you've just got a bunch of functions (with bad names) which can be hiding anywhere.

And the "reasoned about" shibboleth. What "reasoning" do you actually do?

Don't get me wrong - I have a long history in Lisp and ML and they're good languages in which good people can write good software. But no magic bullets.

In another part of the development world I've seen React programmers jumping on the functional bandwagon and deciding that all components have to be stateless. Taking the state outside the components means that we have lots of bugs with mistaken state sharing. So external state doesn't equal well-managed.


In functional programming in Haskell, functions are pure, so there is no state, just an input and an output.

The exception is the IO entry point.

There is a lot of opportunity to sandbox things, so that if you know the type of a function you know what it can or can't do to the program's state.


> not offer true state encapsulation, it just hides it.

That's kind of the definition of encapsulation.

> difficult to reason about

This is repeated as a mantra without any backup. Most people find state trivial to "reason" about. What becomes more difficult are certain types of proofs that nobody actually does.

> [FP] makes it easier to manage and control state

No, not "easier". In fact, incredibly more difficult. Just try updating a nested member. What it does is make it easier to isolate state updates, because it makes it so arduous.


> That's kind of the definition of encapsulation.

State encapsulation, at least in FP, means that the state is not visible externally - for example, a pure function that uses mutable state in its implementation. Such state does not contribute to the system's global state.

> This is repeated as a mantra without any backup.

The more global mutable state your application has, the harder it is to reason about. That should be fairly obvious I think.
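To make the encapsulation point above concrete, a contrived TypeScript sketch:

    // sumSquares uses a local mutable accumulator, but from the outside it is
    // pure: same input, same output, and no state escapes the call.
    function sumSquares(xs: number[]): number {
        let total = 0; // mutable, but never visible externally
        for (const x of xs) total += x * x;
        return total;
    }

    // By contrast, this object's member variable is part of the state space
    // every caller has to reason about.
    class Accumulator {
        private total = 0;
        addSquare(x: number): number { return (this.total += x * x); }
    }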


> State is necessary, not "evil", but needs to be managed well.

> And most OO languages have, apart from the assignment operation, no abstractions for it

Yep. "Once you're inside an object, it's essentially Pascal" -- David Robson

That's what Polymorphic Identifiers[1][2] in Objective-Smalltalk[3] address. Partial inspiration was the Web and its REST model, which shows that using state as the basis for composition is not just possible, but works really, really well at the largest possible scales.

What it does is replace your bog-standard programming language identifiers with URIs. So you can write things like the following:

     comment                    := https://news.ycombinator.com/item?id=14997098
     file:/tmp/comment          := https://news.ycombinator.com/item?id=14997098
     file:{$HOME}/comment       := https://news.ycombinator.com/item?id=14997098
     file:{$HOME}/comments/{id} := https://news.ycombinator.com/item?id={id}
But that's not the cool thing. The cool thing is that scheme handlers are completely user-defined and composable, so you can abstract over state handling, with scheme combinators that take other scheme handlers as parameters, for example a cache, or filters.

Oh, and references (ref:comment, ref:file:/tmp/comment, ...) generalize the concept of a pointer. So creating a symbolic link becomes:

     file:link            := ref:file:path/to/link/target
etc.

[1] http://dl.acm.org/citation.cfm?id=2508169

[2] https://www.hpi.uni-potsdam.de/hirschfeld/publications/media...

[3] http://objective.st/URIs/


Ya. Lots of people think OO equals C++ and Java, but that is like evaluating FP on the basis of Haskell alone.


Agree that Java and C++ are poor examples. However, OOP fundamentally results in a combinatorial explosion of state, which makes programs difficult to reason about. This can be useful in some niche applications but does not make a good general approach. The best part of OOP is arguably its support for first-class modules, but this feature could in theory be added to a functional language.

EDIT: the reason why there is an explosion of state is because every object in OOP is potentially stateful. If I build a composite object Z from objects X and Y, the set of possible states Z has is a product of the set of possible states X and Y have.


OOP fundamentally results in a combinatorial explosion of state

I'd be interested in knowing what theory you are basing this on. In practice, I have never seen this combinatorial explosion of state, but maybe that's just me.


Every object in OOP is potentially stateful; some are made stateless and immutable, but most are not, and you need to read the docs to find out which. If I build a composite object Z from objects X and Y, the set of possible states Z has is the product of the sets of possible states X and Y have. No complicated theory, just basic mathematics. I spent many years building OOP systems before moving on to FP. I strongly disagree with the claim that it is easy to manage state in OOP!


State doesn't magically disappear in FP. It just isn't given a name but it's still here, in your closures.
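
A minimal Python sketch (the same trick works in Scheme, JS, etc.):

    def make_counter():
        count = 0              # the state lives in the closure, not in a named object
        def tick():
            nonlocal count
            count += 1
            return count
        return tick

    counter = make_counter()
    counter()  # 1
    counter()  # 2: mutable state, just without a name attached to it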


Functional programming languages do not capture mutable state in closures.


> Functional programming languages do not capture mutable state in closures.

Yes, they can. In ML, you can build a function that returns a set of closures that manipulate or use shared mutable state. I'm quite sure you can do something equivalent in Haskell, too.

As always, “it depends” whether you would actually want to do that.


The reference type in ML/OCaml is an opt-in feature, a container with its own special type. In Haskell, you can also capture IORefs or similar, but any reads/writes to them are effectful and captured in the types. The point is that Python does this all the time as the default. Any captured variable could potentially change under your feet. It is rarely useful and most beginners do not expect it, hence it is commonly considered a "gotcha".


So LISP and Caml aren't functional programming languages nowadays? Because both allow you to modify what's in a closure.


Apparently we went through some magic event after which Haskell is entitled to be the one and only functional programming language.


There is no language definition or implementation called "LISP". Both Scheme and OCaml lexically capture variables by value, as if they have been passed in as arguments. This is not how it works in e.g. Python.


Python has had closures exactly the same as Scheme and OCaml since the year 2000.

What it lacks is a multiline anonymous function syntax.

See PEP 227: https://www.python.org/dev/peps/pep-0227/


No they are not exactly the same.

From the link you posted: "all uses of the name within the block are treated as references to the current block"

The Python folks call them "late binding closures" and they are frequently discussed as "gotchas".
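
The standard illustration in Python, plus the usual default-argument workaround that forces a fresh binding per closure:

    callbacks = [lambda: i for i in range(3)]
    print([f() for f in callbacks])        # [2, 2, 2]: every closure sees the final i

    callbacks = [lambda i=i: i for i in range(3)]
    print([f() for f in callbacks])        # [0, 1, 2]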

Essentially, lexically scoped variables are captured as references to mutable state. Java did not make this mistake, perhaps because they had Guy Steele (of Scheme fame) on the committee. It is a questionable default that ultimately just destroys one's ability to reason about programs.


> It is a questionable default that ultimately just destroys one's ability to reason about programs.

If that destroys one's ability to reason about programs, then one never really had the ability to reason about one's programs in the first place.

It's not like nested block scopes are some boon to program comprehensibility anyway.


Wait, isn't that exactly how lexical scoping in Scheme works?

That's how you implement "objects" using lambdas. You make closures that mutate their closed-over bindings.


The question is if you mutate the variable, create a closure, mutate the variable, create a closure, and so on -- do those closures -- when executed -- each use the value that the variable had at the time the individual closure was created, or do they all use the latest value that the variable had? Scheme (and other academic languages) do the first, Python (and other scripting languages) do the second.

But the problem with program comprehensibility comes from doing these gymnastics with block scope and anonymous functions in the first place. The right thing to do is to give things proper names and interfaces.

In other words, both what Scheme does and what Python does are bad for reasoning about programs, precisely because the semantics of what should be done in this scenario are implicit and unable to be discerned solely by reading the code. Since either behavior is non-obvious, and understanding the program relies on this implicit behavior, the practice of mutating a variable while repeatedly creating closures over it should be avoided.


No, multiple closures in Scheme will not capture different values of the same, mutated variable. For that to happen, the block has to be re-entered such that a fresh binding is created for those variables. That is not mutation.

Scheme's version of the do loop actually performs fresh binding, rather than mutation, so lambdas in the do loop capture the iterations separately.

If you mutate a variable with set!, all the closures over that variable see the change.


Thanks, "mutate" was definitely the wrong word in my post. The trouble here is that Python doesn't have syntax to do a binding rather than an assignment.


There is an ANSI standard for Common Lisp, though. And ML and OCaml have a “reference” type that was built for the explicit purpose of having mutable contents.


The ML/OCaml reference type is an opt-in feature with its own type and explicit syntax, it is not the default used pervasively everywhere. That's the key difference that makes ML programs easier to reason about.


> There is no language definition or implementation called "LISP".

Well, yes there actually is: the Lisp 1 language that McCarthy and friends developed on the IBM 704, and its immediate follow-ups through Lisp 1.5. Since then, there isn't.


There is a basic rule: all languages carrying Lisp in their name implement much of the core of McCarthy's Lisp:

Emacs Lisp, ISLISP (which is defined by an ISO standard), Standard Lisp, Common Lisp (which has an ANSI standard), ...

Languages in the wider Lisp family, which are more or less incompatible (Logo, Clojure, Dylan, ...) don't use Lisp in their name.


Yes, there is: Common Lisp, and it is an ANSI standard. This is what is commonly understood as "Lisp" (the noun), as opposed to when writing "a Lisp", in which case it could be Scheme, Racket, Clojure, etc.


> However, OOP fundamentally results in a combinatorial explosion of state

Eh, no it doesn't. I think you need to expand on that statement because it isn't remotely true in my experience.


I've now expanded on that statement. In my experience working with OOP in large, multi-million-line codebases, the state issues have proven very real.


How can you know that without comparing the same programs with an FP version? What if those codebases would become messy even with FP, simply because the problem they are trying to solve is difficult? Also, in what way does OOP prevent you from writing the code in an FP style? It doesn't, because FP and OOP are orthogonal concepts.


It is much harder to do functional programming in an OOP language, as one is often fighting the syntax, semantics and various defaults. For example, variables are typically mutable by default, null inhabits every type, and libraries and frameworks have pervasive mutable state.

Granted it has been getting easier as OOP languages have been slowly getting more and more functional features. But it really is easier to favour immutability and composition in a functional-first language.


The combinatorial explosion of state is only problematic if some parts of your program can actually "see" the global state resulting from the combinations, and thus, depend on some specific combinations in an undesirable way. In this case, you might have a design issue.

This is exactly one of the problems encapsulation and abstraction aim to solve, and has nothing to do with OO: you could also create a combinatoric explosion of possible behaviours by dynamically composing pure functions.


Any global state is potentially visible externally in terms of the behaviour of your system. Yes pure functions can be complex, but their behaviour is entirely reproducible and testable. For state not to be visible externally, i.e. true state encapsulation, then you need components that appear stateless externally, which is not how most people do OOP.


I've noticed most HN comments have transformed from being discussions about the _content_ of the article into discussions of the _topic_ of the article being posted. Not sure if this is healthy, but the risk is HN becoming an echo chamber where people discuss opinions, with the posts becoming just topic triggers. We'll see how it evolves.


Actually reading the whole article takes time, while reading the title and posting a knee-jerk reaction takes only a moment. One of the curses of a hotness-oriented system like HN is that earlier comments receive the bulk of the upvotes and responses, so it becomes less interesting to comment as time goes on.

There are lots of little rules in place to minimize the damage of this (like no memes, jokes, pics, etc.; HN actively discourages clickbait), but it can't force people to actually read the articles before responding. And, one doesn't have to read the articles to vote, either (again, HN mitigates that problem somewhat by requiring a minimum level of upvotes before being able to vote, etc.).

It's hard to have a substantive conversation online when so many of the incentives are for responding quickly (and probably without going too far against popular opinion within whatever community you're in).


I wonder if there are other mitigation techniques that would be effective. For example, hiding all comments for the first N hours of a post's life, then showing all comments at once. This way, the incentive is towards writing the best comment you can, quickly _enough_.


Actually, the article makes some far-fetched assertions which may not even correspond to reality and may not even be worth discussing at all.

OO (in this case the author means C++) is less related to a programmer's understanding of reality compared to C because... programmers of the 90s made more errors that were harder to fix in C++.

Yeah. Right.


It does seem popular partly because of an anti-OO bias that bubbles under the surface here at HN.

But, it is reasonably hard data, and data has value, even if the interpretation is up for debate. I think if everyone had read the article, that'd be the more common theme of the discussion here: Are the conclusions valid based on the data seen, and either way, is it useful to know...is there something actionable in this data?


> but it can't force people to actually read the articles before responding

They could hash the title for a minimum time, or until at least three comments are posted.

That would at least force people to click the link to find out what's there, and filter out commenters too lazy to do even that.


Hashing the title when first posted would likely lead to more quality problems. The first stage of QC is New. Nothing makes it to the front page without a little time on New.

So, I think that hashing titles would make it so that the worst (by some definition, but one that probably matches HN's definition) clickbaity stuff would rise to the front page faster and more often. This would happen because people could no longer make any decisions about what to click on New by its title, and so the short articles that confirm common tropes, the funny pictures, etc. would get people coming back faster to click the upvote, while the longer articles would be slower to get upvotes.

Same problem we have already, only more so. At least now, we have a stage where we can see the title in New and the interesting things might get upvotes sooner...you have a little more knowledge going in. I.e., I might not read the whole article about something I find really interesting, but I might click over to it, see that it's by someone I trust or seems to be well done and free of fluff, and upvote it before finishing reading.

I dunno. I could see blocking commenting for a time after a post. But, hashing titles seems to just take away valuable information people use to make upvote decisions during the vital New phase of a post. Also, there's no way I'm clicking on every link on New. New is a goddamned wasteland; 50+% garbage, sometimes.


Or just force the user to click the link before the comment box is enabled for that posting.


then waiting X minutes before enabling the comment box too, to avoid just clicking and posting?


You can make people answer a simple question about the content before allowing a post.

Easiest implementation is that the article poster gets to define the question and answer. In the future, an AI could take over.


one could make a sort of article captcha..


That'd be a fun AI project that would be reasonably easy with off-the-shelf components: A summarizer and test generator.

It'd probably generate a lot of false negatives, though. Even humans make bad comprehension tests sometimes.


That's true but the HN algorithm for deciding what articles make it to the front page has itself become very political.

For example, the amount of articles about programming languages like Rust or Elixir is disproportionate compared to their actual relevance in the industry.

HN has a very strong bias for functional programming.

You almost never hear about Go or Node.js anymore except when there is a major release, but these languages are extremely widespread and proven, and there is a lot of very interesting stuff happening in those communities that HN seems to ignore.


When Go and Node were new, they benefited from the same disproportionate level of interest here as Rust or Elixir currently do.

It's just the nature of the industry (and people in general). New and different is always more interesting than old but incrementally improved.


I would agree with Pavlov. I would also say that Rust is a much more disruptive and interesting language from a technical perspective. Node is simply another iteration on quick prototyping languages, like Ruby before it. There's not a lot of new stuff to say about its design. Projects built in node show up in the feed all the time though.


> For example, the amount of articles about programming languages like Rust or Elixir is disproportionate compared to their actual relevance in the industry.

But not to their possible relevance.


Unfortunately, the way HN sorts comments seems to encourage this -- the top comments always seem to be some of the earliest comments, and are usually posted before there is time to read the article (unless the commenter has read it previously, of course).


But I came here to read commentators' statements on the subject. I only clicked the paper due to your comment. The paper makes the claim, early, "[OOP's] central premise appears to be that it matches the way we think about the world."

This is the same as the title, and enough for us to discuss. I want to hear what people here have to say about the subject. That's a lot more interesting than what this 1997 paper has to say on the subject. (Of course, people can comment about that as well.)


If writing is thinking (and googling the "writing is thinking" quote produces plenty of hits), then object oriented code reflects the way we think because we write it... I mean, thinking in terms of object orientation is pretty much a given in order to write object oriented code. That's a different proposition than saying object oriented programming reflects the primary way humans think, or the only way humans think.

Without diving deep into the uncanny valleys of existential philosophy surrounding trying to be both observer and observed, there is a big leap from I am thinking in objects right now to Everyone always thinks in objects [or something approaching it].


> then object oriented code reflects the way we think because we write it

There are non-linguistic modes of thought that more naturally capture our intuition of evolving systems. You don't linguistically think through a baseball play in the moment. The most intuitive programming paradigms are actually event-driven [1].

To summarize, you can compute by thinking and you might be able to think by computing, but thinking is not like computing. Any particular programming paradigm is a mode of computing, not thinking. A programming paradigm that more closely aligns with thinking is more intuitive and natural, and will lead to fewer errors.

[1] http://www.cs.cmu.edu/~pane/IJHCS.html


By that standard it is impossible for any type system not to match the way we think: for example a programming language could be abstracted in terms of food, recipes, and breakfast, lunch, and dinner, and express literally everything that way.

Although by your standard our code would then match the way we think, clearly this is not a useful takeaway.


You're not refuting the point he was making, because no one is writing code in such a language.

> If writing is thinking, then object oriented code reflects the way we think because we write it

The point is that the huge majority of systems (by any metric: number of engineers, lines of code, budget, ...) are written in an OO style (regardless of whether the language actually supports OO; practically all bigger C projects use OO), and the majority of people are just fine with that and get started fairly easily with it. Hence it is reasonable to assume OO is a good fit for our own thinking processes.

By comparison, very few people find learning FP easy, and most seasoned FP developers openly say that getting started and acquiring the mindset for FP is a difficult and long-winded process.

Similarly, most people find reasoning about point-free programs (concatenative languages such as Forth) quite difficult.


(Yes, I misread the line you quoted as saying something different.)

You also made your point very well, and all the facts you highlight support you and are good reason for me to doubt my conclusion regarding object oriented programming. I thought it's a terrible match for how people think about the world yet you give very strong reasons why I was wrong. (In other words you've convinced me.) Thank you.


I actually don't think any of the points you've raised are valid or empirically supported at all.

Most novices do not in fact find OO easy to learn, and most novices do not in fact find FP harder than OO to learn. What you are probably alluding to is the difficulty in learning a specific language, like Haskell. Haskell is not FP.

Also, while most software projects use some kind of OO language these days, I don't know why in the world you'd think this somehow entails it fits our thinking. By all accounts, software defect rates aren't any better under OO than under procedural language paradigms. How does that make it intuitive?

To make any kind of "reasonable" conclusion, you'd need a comparison against software written under other paradigms. Such studies have been done, and OO does not fare as well as other paradigms on many metrics.


That's not the idea I was trying to convey. I think any claim that any language (computer or ordinary or otherwise) captures the way people think independent of the use of that language is at least suspect [and for me probably false, but I could be wrong].

To me, it seems more likely that the flexibility of human cognition allows a person to think in terms of a programming language than any particular programming language expresses something important about the mechanics of human thought. Another way of putting it is that the claim that object oriented programming is ontologically different from other Turing complete languages seems less believable than ontological equivalence -- the Chicken [1] interpreter is written in JavaScript.

[1] http://torso.me/chicken


I'm afraid we disagree entirely. Before the existence of any computer language, humans talked, and thought: at various levels of rigor. Mathematical books were published for millennia. People thought, interpreted their world, argued with each other, and so forth.

With the advent of computer languages, some of these have captured more of this independent thinking and abstraction, others capture less of it, while still others require their users to learn abstractions they were not used to.

The idea that all of this falls in the last category, with no capturing of the way "people think independent of the use of that language" is totally false to me. Sorry.


Mathematical books were published for millennia.

Writing is thinking...etc.


Can you be a bit more precise? I feel like we're talking past each other but perhaps have misunderstood each other.


Is a discussion of the discussion healthier?


meta is feta


I found this anti-oop video very persuasive. https://youtu.be/QM1iUe6IofM

He also had a couple of follow up ones.

Basically he blames Java for OOP's popularity. He maintains that OOP's combining of methods and data is actually a fault and not a benefit. That everything good OOP promises, apart from inheritance (which is bad anyway), is more easily achieved in procedural programming. OOP promotes endless meaningless abstractions because the problems programmers deal with are not amenable to neat abstractions. So while OOP looks great when dealing with animals, cats and dogs, in real programming you end up with an explosion of difficult-to-name classes, the so-called "kingdom of nouns".


I'm sad you've been downvoted. I loved this video.


Thanks! I've always found it unnatural to design solutions in an OOP way. And any time I have done it, the code always seems a lot harder to debug afterwards. At the same time, I kinda always felt that my preference for procedural code reflected badly on my programming skills. To hear someone who is clearly skilled and knowledgeable argue so well in support of the way I like to program was a bit like having a weight lifted off my shoulders.


It should be said in defence of C++, whether its own OO model is good enough for you or not, that object scoping and destructors alone brought a significant paradigm shift in programming in the 1990s. Although these concepts never really made it to other mainstream languages in such a direct form, they certainly inspired all kinds of memory management models (ARC, GC). The result is that C++-style destructors and scope management exist implicitly in pretty much every language today.

As for the OO itself, I think the problem is that any poorly written non-OO code typically goes unnoticed or just labelled "bad code", whereas poorly written OO code is labelled as such: poorly written OO code. They both are equally bad. We need to accept that if something is wrapped into a class or even some hierarchy, it is by itself not a guarantee at all that it can be useful or functional.


All these guys bashing OO, and in the meantime every software stack involved in their comment leaving their home connection is object-oriented (even if not necessarily written in an OO-friendly language): web browser, Linux, Windows or Mac kernel, router firmware... Seriously guys, if you think OO has failed you are oblivious to the amount of successful OO software that everyone uses every day.


Also. Every time object oriented programming gets brought up, people miss a crucial part: object oriented modelling.

Object oriented modelling means creating a model of a resistor, or a diode, etc., and then using it in different models. For simulations, such inheritance is exactly how humans think, and exactly what makes life easy.

If I'm simulating an air conditioning system, I create a class of compressor models that correctly abstracts the equations most compressors have. I can then derive from that base class and reuse it for more complicated models and eventually subsystems of such models. Each model has been debugged by itself, and its reuse is a matter of a few "connect" statements. Further, I can have classes of, for example, component geometry or state parameters. This makes thinking about the problem much easier.
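
A very rough sketch of that pattern (in Python rather than Modelica, with made-up equations and parameters):

    class Compressor:
        # base model: the relations most compressors share
        def __init__(self, displacement, volumetric_efficiency):
            self.displacement = displacement
            self.volumetric_efficiency = volumetric_efficiency

        def mass_flow(self, speed, inlet_density):
            return self.displacement * speed * inlet_density * self.volumetric_efficiency

    class ScrollCompressor(Compressor):
        # derived model: reuses the base equation, adds a leakage correction
        def __init__(self, displacement, volumetric_efficiency, leakage):
            super().__init__(displacement, volumetric_efficiency)
            self.leakage = leakage

        def mass_flow(self, speed, inlet_density):
            return super().mass_flow(speed, inlet_density) - self.leakage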

This is exactly why Modelica is eons better than Python or Matlab/Simulink for writing good simulation code for certain cases. The "solve a problem only once" aspect is handled fantastically in object oriented modelling.


> Seriously guys, if you think OO has failed you are obnoxious to the amount of successful OO software that everyone uses every day.

Not sure what you mean. Do you mean oblivious?


Amphibious pitcher debuts in MLB.


What?


Sorry, that was a viral image this week :)

https://twitter.com/johnmclarkejr/status/74106144399966617


Damn autocomplete!


I did indeed. Sorry, not a native speaker.


In what way is the Linux kernel or my router firmware object-oriented?

Linus Torvalds has even gone on record to express his distaste for C++ and OOP.


Object-oriented design patterns in the kernel, part 1: https://lwn.net/Articles/444910/

I'd go as far as to say that every time you have a C API that looks like

    struct some_struct { /* ... */ };

    void  my_api_foo(struct some_struct *, int param1, int param2);
    int   my_api_bar(struct some_struct *);
    float my_api_baz(struct some_struct *, float **, int n);
it's OO. And the kernel is full of this (just like many C APIs).

Also, Linus's arguments are about C++ usage in the kernel. His diving software Subsurface was ported from GTK / C to Qt / C++, for instance. (And even GTK, with GObject* everywhere, is fairly OO.)


Passing a common struct into one or more procedures is absolutely not what is commonly understood as object-oriented programming. This is what people did (and are still doing) long before OOP existed. I also do it all the time in Haskell.

Similarly that article is wrong to imply that all these techniques are unique to OOP.


It absolutely is.

OOP languages formalized this pattern and made it easy to use.


No, it absolutely is not. OOP has lots of well-known unique characteristics such as inheritance, dynamic dispatch, polymorphism, encapsulation, etc... as well as other less-frequently-noticed semantic differences, like the fact that the message-passing follows a subroutine model (which may be more difficult to appreciate if that's the only thing you're used to). Yes, you can do OOP in C, but just making a struct and defining procedures to operate on it doesn't imply any OOP.


> OOP has lots of well-known unique characteristics such as inheritance, dynamic dispatch, polymorphism, encapsulation, etc...

OO just means that you have a semantic entity, the object, which contains data and to which you associate code. Your main tool to work with is this object; just like in FP your main tool to work with is the function to which you associate data through closures and currying. Likewise, in logic programming your main tool is the constraint. And these aren't relevant to the language. You can do both OO and FP in most mainstream languages.

> dynamic dispatch

is just allowing you to associate a different implementation with the same function name. Every language that has some kind of function pointer allows this.

> polymorphism

is in almost every language, including FP.

    data Tree a = Empty | Node a (Tree a) (Tree a)
is polymorphism. C++ templates are polymorphism. Rust traits are polymorphism.

> message-passing

is only relevant if you adhere to Alan Kay's vision of objects. I personally don't (even if the guy coined the term).


A lot of misunderstanding in what you wrote, but to comment on two parts:

> Every language that has some kind of function pointer allows this.

It doesn't seem like you're reading the discussion. The claim the parent made was that "every time you do {X}, it is OOP". My reply was "yes, you can do OOP in C, but OOP implies far more than just {X} (it implies {Y}, etc.); merely {X} does not imply OOP".

You reply with "languages that have function pointers allow {Y}". Well yes, they do allow {Y}. Most/all of them in fact probably allow OOP in full. Nobody suggested such languages don't allow OOP (in fact I said the opposite about C). What does that have to do with the entire discussion and argument? Your argument isn't even wrong... it doesn't even compile.

> message-passing is only relevant if you adhere to Alan Kay's vision of objects. I personally don't (even if the guy coined the term).

This 100% completely misses the point of what I said. Replace it with "procedural call" if you're allergic to "message passing". The point I was making was we're talking about subroutine calls: you call a procedure and wait for it to produce a single value as the resulting output before proceeding. Again: if you're not used to other paradigms then that might be why you're missing my point here and adversely reacting to superficial things like the nomenclature. Whether you dress it as message passing or anything else has nothing to do with the issue.


> yes, you can do OOP in C, but OOP implies far more than just {X}

Maybe it was not clear; the point I am trying to get across is that "OOP implies far more than just {X}" is incorrect: OOP doesn't imply any of inheritance, dynamic dispatch, run-time polymorphism, etc. I took function pointers as an example of why "supporting dynamic dispatch" is irrelevant for categorizing programming languages as OOP / not OOP (and I would even say that doing such a categorization is in itself a worthless idea).

> you call a procedure and wait for it to produce a single value as the resulting output before proceeding

I must admit my ignorance: which languages except Prolog don't work like this?


> I took function pointers as an example of why "supporting dynamic dispatch" is irrelevant for categorizing programming languages as OOP / not OOP.

Again, like everywhere else in your comments, you're conflating and confusing the language with the model/paradigm and keep trying to shove a language into a programming paradigm. Firstly, you can do OOP in assembly and yet it's not an "OOP language". The language is quite independent of the paradigm. Secondly, who was even trying to categorize languages here? You keep thinking the argument is about categorizing languages and then refute a nonexistent discussion...

> I must admit my ignorance: which languages except Prolog don't work like this?

Again: the discussion is not about trying to shove languages into categories; it's about programming paradigms. You can use the same language to code in multiple paradigms.

Look up "actor-oriented programming". It's a different model meant for concurrent processes, i.e. it's not a subroutine model since you don't need to wait for the procedure to finish and produce a result before continuing.

There are lot of "models of computation" out there (probably a better search term than "programming paradigm", btw) and people go so far as to do PhDs on these. I suggest looking around and not assuming everything is some trivial variation of OOP or FP or DP (declarative programming) or whatever you saw in undergrad.


This thread brings to mind an old quote:

  What is object oriented programming? My guess is
  that object oriented programming will be in
  the 1980s what structured programming was in
  the 1970s. Everyone will be in favor of it.
  Every manufacturer will promote his products
  as supporting it. Every manager will pay lip
  service to it. Every programmer will practice
  it (differently). And no one will know just
  what it is.
Tim Rentsch, "Object Oriented Programming", SIGPLAN Notes, v17, n9 (1982).


I fear this is the destiny for functional programming in the 2020s.


OOP is all about the nouns while FP/procedural is all about the verbs. That's it really: are you modeling your system in terms of objects (nouns) or behavior (verbs)? It is super easy to tell just by looking at the names in your code. Semantic features only play supporting roles, they don't define the paradigm.


that's a nice definition


OOP languages formalized the pattern of passing data structures into procedures?!


A definition of OO trivial enough to make everything OO isn't very useful. That's why inheritance, encapsulation etc. are often included, to mark the difference against what was before.


But there is an infinite spectrum of languages that mix and match various features of what you call "OO" to varying degrees, making it meaningless to include all of these in the definition. The most famous ones tended to bundle them all, which led to the misappropriation, but really, there's no problem with simple concepts. We don't have to find names for everything: it's much clearer to say "language Foobar makes immutable values first-class and allows interface inheritance and delegation" than "language Foobar is FP/OO/whatever seems to be most predominant when looking at the hello world in such a language".


My strategy for personal coding recently has shifted towards a sincere exploitation of automatic programming(in a manner similar to model-driven development or language-oriented programming, or the research of VPRI). The overall feedback loop looks like this:

* Write an initial mockup that exercises APIs and data paths in an imperative, mostly-straightline coding style

* Write a source code generator that reproduces part of the mockup by starting with the original code as a string and gradually reworking it into a smaller specification.

* Now maintain and extend the code generator, and write new mockup elements to develop features.

The mockup doesn't have to be 100% correct or clean, nor does the code generator have to be 100% clean itself, nor does 100% of the code have to be automated (as long as a clear separation between hand-written modules and automatic ones exists), but the mockup is necessary as a skeleton to guide the initial development, and similar, comprehensible output one layer down is a design goal. Language-level macro systems are not typically sufficient for this task since they tend to obscure their resulting output, and thus become harder to debug. Languages that can deal well with strings and sum types, on the other hand, are golden as generator sources since they'll add another layer of checks.

I'm still only using this for personal code, but it's gradually becoming more "real" in my head as I pursue it: the thing that stopped me before was developing the right feedback loop, and I'm convinced that the way to go is with a pretty lean base implementation (Go is an example of how much language power I'd want to be using in the target output) and an assumption that you're building a bespoke generator for the application, one that won't be used anywhere else.

Source code generation gets a bad rap because the immediate payoffs are rare, and it's easy to take an undisciplined approach that just emits unreadable boilerplate without gaining anything, but the potential benefits are huge and make me not really care about design patterns or frameworks or traditional "paradigms" anymore. Those don't achieve anywhere near the same amount of empowerment.


This is an approach I've been exploring myself. Do you have any suggestions or recommendations as to languages that lend themselves well to this approach?


> Studying the copious literature of OO, the central features which define an OO system seem a little ill-defined.

I didn't know opinion pieces could be disguised as academia. Of course the article has some interesting points about human memory, but that's why we have things like single responsibilities.


But it is not an opinion. There is no universally accepted definition of OO let alone a formal, axiomatic definition, and that makes it ill-defined.

If you disagree, point to a definition of OO that you think is "correct".


I suggest to look at Alan Kay's definition, since he's the one who invented the term "object oriented".

http://www.purl.org/stefan_ram/pub/doc_kay_oop_en


Well, Alan Kay's definition is just one definition and as evidenced by the rest of this thread, it's far from being universally accepted and definitely not the definition used in the mainstream "OO" languages.

Further, that email thread is far from being a formal definition. In that email there's a pointer to an ISO standard doc, but that's hardly authoritative either.

It seems like every language (and every programmer) has their own idea of OO, which is why I consider it quite apt to call it "ill-defined". That's not to say that OO programming itself is bad, more that the term "OO" is bad.


I'm not disputing the opinion, rather the fact that the author doesn't document why the opinion is there. Maybe the author went through research on the subject and found a consensus, but maybe not.


> that's why we have things like single responsibilities

I would argue that a single responsibility is a nice thing to have. However, OO-style objects are responsible for both data and behaviour. Object-oriented languages themselves violate the SRP, to their own detriment.

Both functional and imperative languages tend to encourage separation of data and behaviour. I can operate on the same data structure implemented under a different context, without having to bring in its unrelated set of methods. This leads to cleaner code reuse and looser coupling - both nice properties to have in a complex codebase.
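
A small Python sketch of the kind of thing I mean (names invented): the same plain data structure gets used from two unrelated contexts, and neither context has to drag in the other's methods:

    from dataclasses import dataclass

    @dataclass
    class Reading:
        sensor_id: str
        value: float

    # stats context
    def average(readings):
        return sum(r.value for r in readings) / len(readings)

    # export context: same data, no shared class hierarchy needed
    def to_csv_row(reading):
        return f"{reading.sensor_id},{reading.value}"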


OO is just a tool, a powerful one at that. Like any tool, it can be abused, overused and incorrectly used - the classic comment that comes to mind is:

"When the tool you have in your hand is a hammer, every problem looks like a nail..."


The paper specifically calls out inheritance and polymorphism as problematic, especially in the context of maintaining existing code. Which makes me think that code visibility is at least part of the issue. We've always had functions, libraries, and "includes", but their re-use was infamously coarse-grained, static, or not well leveraged outside their initial scope.

Things like inheritance, polymorphism, dependency injection, and "abstracting things away" in general have made it practical and common for implementation details to evaporate in such a way that we don't (and maybe can't) keep them in our heads when we are maintaining existing code.

This is great if all those class hierarchies and abstracted implementations are bug-free and work the way you anticipate them working (i.e. if they fit your mental model). As soon as they don't, or they get too complex and too deep, you're in for a world of hurt and confusion.


How to model a door.

    class Door {
      void open() {...}
      void close() {...}
    }

    door.open();
But can a door open itself? Maybe

    Man petya = ...
    petya.open(door);
But can a door be opened if it's not installed? So walls should also come into play when we think about the door and its methods.

It's difficult to model the world with OOP (at least in its current state).


A `Door` is not an agent, so no, it can't open itself (unless it's a game and your doors have agency).

When you say a door can be opened, you are making the assumption that the door is installed in some sort of building. But then, the door is a component of that building, not just a stand-alone door. It can only be opened in that context.

So I would suggest something like this:

    class Door(Material, Size)
    class Room(List<RoomConnection>, Size)
    enum DoorState { Closed, Open, Locked }
    class RoomConnection(Room, Room, Optional<(Door, DoorState)>, Size)
To open the door, you don't need a method in any of these "data" classes, you just need to update the door state given there's a door in a room connection you're interested in.

The room connection may or may not have a door, and that could change over time. If it does have a door, then the door must have a state (but not all doors have a state, only the ones that are in a Room Connection, possibly others).

Notice that to update the room connection, you don't need mutable state locally... you just need a way to provide a new RoomConnection to code where opening a door may be possible (normally, you're given a RoomConnection as an input, and you might give a new one, basic FP style but OOP can also operate in that way).
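
A minimal Python sketch of that shape (simplified, field names made up; rooms are just names here):

    from dataclasses import dataclass, replace
    from enum import Enum
    from typing import Optional, Tuple

    class DoorState(Enum):
        Closed = 1
        Open = 2
        Locked = 3

    @dataclass(frozen=True)
    class Door:
        material: str
        size: float

    @dataclass(frozen=True)
    class RoomConnection:
        room_a: str
        room_b: str
        door: Optional[Tuple[Door, DoorState]]
        size: float

    def open_door(conn: RoomConnection) -> RoomConnection:
        # opening only makes sense where a door exists and isn't locked
        if conn.door is None:
            raise ValueError("nothing to open here")
        door, state = conn.door
        if state == DoorState.Locked:
            raise ValueError("the door is locked")
        return replace(conn, door=(door, DoorState.Open))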

So, I don't think it's that hard if you know what you're doing.

I don't want to even comment on the suggestion to have a `Man.open(Door)` method :) that's really not good.


You still follow the naive OOP thinking and it remains ridiculous.

For example, why Room?


A common sentiment expressed in threads like this is that people can write bad code in any language, therefore language doesn't matter. While that may be true, I don't think judging a language by the worse parts is a useful exercise. I'd rather judge a language by its best parts.

To that end, can someone point to me an example of object-oriented code that they feel is elegant?


At work, I am currently working on a program that needs to read XML messages from a message queue and, depending on the type of message, extract several pieces of data from it and write them to a specific list on a SharePoint server.

I wrote an abstract base class that contains all of the boilerplate code for dealing with SharePoint, and then one subclass for each type of message I need to deal with; those subclasses only need to override a few methods to deal with the fields from the XML message and with the specific SharePoint list they need to write to.
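
In rough outline it looks something like this (a simplified Python sketch with invented names, not the actual code):

    from abc import ABC, abstractmethod

    class MessageHandler(ABC):
        # shared SharePoint boilerplate lives in the base class
        def handle(self, xml_message):
            fields = self.extract_fields(xml_message)
            self.write_to_sharepoint(self.list_name(), fields)

        def write_to_sharepoint(self, list_name, fields):
            ...  # connection handling, retries, etc., identical for every message type

        @abstractmethod
        def list_name(self):
            ...

        @abstractmethod
        def extract_fields(self, xml_message):
            ...

    class InvoiceHandler(MessageHandler):
        def list_name(self):
            return "Invoices"

        def extract_fields(self, xml_message):
            return {"number": xml_message.findtext("number")}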

It is not great (the task is just too dull for that), but I am happy with the approach I have chosen because I can add support for new message types fairly easily.

That being said, I do not think OOP is the answer to all (programming) questions, nor is it the worst idea ever. Some programs can be expressed in terms of objects and inheritance very easily, just as some can be expressed in functional terms very easily.


The discrete event simulator and the circuit simulation using it in Odersky's book[1] had caught my attention as neat.

[1] http://www.artima.com/pins1ed/stateful-objects.html


I think a lot of the "standard" objects in Smalltalk-80 are very elegant. In particular, have a look at the implementations of Boolean, True, and False (it has been posted to HN before). Clever.



I think this latest resurgence of OOP skepticism could be a good thing, but I can see it easily turn into folks just using OOP as a scapegoat for bad design.

I worry as well about the push for FP to somehow 'replace' OO, when really they address different concerns and can actually complement each other.


You're absolutely right, just look at #select:, #collect: and #reduce: in Smalltalk for example! It's totally amazing.


But then, if OO is a dud, why do some of its implementations (C#, C++ and Java) drive most of the enterprise? If they had not existed, would enterprises have had the same faith to adopt software applications to run their daily business?


OO was a fad for how many years? All these languages were strongly influenced by this, and it greatly colours the worldview of the programmers who work with them.

If you work in Java, everything is contained within a class. You can't easily think outside the OO box because the language insists that everything you do is done in terms of OO, and this has great implications, from the standard library to primitive type boxing. It's very much constrained by the prevalent trends of the mid-90s. C++ is more OO-agnostic, OO being something you can opt into as and when it makes sense.

I certainly don't think OO is a "dud", and it's provided a conceptual framework for programmers of wildly varying abilities to create and maintain vastly complicated codebases. But... it's only one way of many to structure and reason about program logic and data, and to constrain oneself to only using OO is greatly limiting. Now that the hype has died down, I hope we can use OO where it fits, and avoid it where it does not, rather than as a blunt instrument for every problem?

The main problem I see in my day to day work (imaging related) is that OO is too costly. Arrays of object instances are too cache unfriendly. And code doesn't always need to be directly tied to data. I see Java code using collections of primitive arrays in place of objects because most OO languages don't provide a means of laying out objects in column stores, even though it wouldn't be technically difficult (just different offsets to members rather than packed structures). Treating objects as individual private collections of state leads to only being able to work with an object-centric rather than data-centric view, which can be quite detrimental. It's this view that leads to abominations like database object mappers, treating tabular data as object collections when they are not. While such things are possible and even popular, they often come with significant tradeoffs. I've encountered developers who are completely constrained within an OO worldview and can't take the blinkers off and see what's possible outside the box.
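
For anyone unfamiliar with the distinction, a rough Python sketch of the two layouts (illustrative only):

    # object-per-element ("array of structures"): every pixel is its own heap
    # object, so scanning one field touches scattered memory
    class Pixel:
        def __init__(self, r, g, b):
            self.r, self.g, self.b = r, g, b

    image_aos = [Pixel(0, 0, 0) for _ in range(1000)]

    # column layout ("structure of arrays"): one flat array per field, which is
    # what the Java code I mentioned ends up hand-rolling with primitive arrays
    image_soa = {
        "r": [0] * 1000,
        "g": [0] * 1000,
        "b": [0] * 1000,
    }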


Prototype-based inheritance (e.g. JavaScript, Lua) is a much better concept. (It's much more powerful and even allows you to implement class-based inheritance.)

Also functional programming has its benefits.

The 1990s/2000s, with the OO hype around C++/Java/C#, were a setback.


Think of the program not as a program, but as a family. A family has multiple persons (threads), who can do different jobs (functions). A family has a home (system)...


Still debating OO in 2017 sounds like a bad idea.


Is the way we naturally think the way to solve problems? No. So OO isn't useful.


Famed YouTube programmer Brian Will also dislikes OO: https://www.youtube.com/watch?v=QM1iUe6IofM&t=2096s



