Objects Have Failed (2002) (dreamsongs.com)
46 points by wsc981 on March 2, 2023 | 102 comments



I hope that, with the assistance of time, we can now adopt a more Aristotelian/Catholic "and" vs. a Platonic/Puritan "or" with respect to these various language features.

Object oriented programming is a perfectly reasonable way to organize a code base and the basics provide for good encapsulation. Elaborate object hierarchies and the design-pattern maximalism of the early Java community that led to the hilariously Germanic ThingDoerFactoryFacadeFactoryInstantiatorService abominations are obviously bad, but the basics of OO do a good job if used reasonably.

Increasingly we are seeing pragmatic integration of different language features: for example using classes/objects for data modeling but using functional-style closures for data structure manipulation. Pragmatic static type systems with 'escape hatches' when needed, or dynamically typed language adding support for some static typing. Etc. This is the right thing.

Not "either/or" but rather "both/and" and using the right tool for the job. The manichaeism of early language discussions can be set aside and, rather than fighting about which language is "the right thing" we can talk about which language feature(s) is/are the right thing for this particular problem.


Please, define "reasonably".

Because the people who created the hilariously Finnish ThingDoerFactoryFacadeFactoryInstantiatorService also thought they were doing a reasonably reasonable job.

Per your comment on "use classes for data modeling", I would like to offer two historical facts: it took more than 25 years to get Simula (and OO) types right [1], while it took no more than 8 years for Per Martin-Löf to get intuitionistic type theory right [2].

[1] http://lucacardelli.name/Talks/2007-08-02%20An%20Accidental%...

[2] https://en.wikipedia.org/wiki/Intuitionistic_type_theory

Classes are much harder to reason about than other type systems.


Definition of a reasonable paradigm: millions of developers can successfully use it to build software systems used by billions.

Windows, iOS, Android, Photoshop, Excel, Chrome - these are complex products that use object-oriented programming. They exist and work well. They have shipped many new features over the years.


Those products don't need to be complex; at heart they are actually rather simple. They are just written in such a way that they become unnecessarily complex, because they are poorly understood.

I would argue that the examples you give don't work well (Windows, Android, Chrome, later versions of iOS). They get worse and worse over time. Even if they start out with a good design, it turns to crap eventually.

Just because a piece of software is used by billions doesn't mean that it is well written.


If by successful you mean the slow-to-develop, slow-to-run, bug-ridden software we have to deal with today, you have built a great definition.

This kind of thinking got us Teams and Slack.


Millions of developers also use functional programming, and have for decades. An even bigger order of magnitude of developers use relational languages to build systems of various scales. Indirectly, even more millions of developers use constraint programming of various sorts (compilers, databases and user interfaces).

Given that, your definition does not provide much to discern between paradigms.

But we can compare the complexity of the logic of different systems, and I provided two examples of that. We can even discuss an ordering between different systems.

For example, a contemporary type system that is powerful enough (not even dependently typed) can model a class hierarchy; this has been no big deal [1] since at least 2005. So classes are, in some sense, less general than the System F used there. They are also more specific, whether that is good or not.

[1] https://www.cs.tufts.edu/comp/150FP/archive/oleg-kiselyov/ov...

But in the System F encoding in [1] you can have several parallel class hierarchies, without requiring anything else from the system.

The same goes for other things like "escape hatches".

A system with dynamic types cannot prevent (runtime) errors that can be prevented by a system with static types. It is just not possible to have a reasonable description of a system free of certain kinds of (runtime) errors without reimplementing much of a static type checker. On the other hand, in a system with static types it is quite possible to have an over-encompassing type that represents all the dynamic values one needs.

So systems with static type checking are, in some sense, more general than systems with dynamic types. And, of course, systems with dynamic types are more specific, whether that is good or not.
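For example, here is a minimal Java (17+) sketch of such an over-encompassing type; all the names are hypothetical:

    // A sealed interface as the "over-encompassing type" that carries
    // all the dynamic values one needs, while every use site is still
    // statically checked.
    sealed interface Dyn permits DynInt, DynStr {}
    record DynInt(int value) implements Dyn {}
    record DynStr(String value) implements Dyn {}

    class Show {
        static String show(Dyn d) {
            // The compiler knows these are the only two cases.
            if (d instanceof DynInt i) return "int: " + i.value();
            if (d instanceof DynStr s) return "str: " + s.value();
            throw new IllegalStateException("unreachable");
        }
    }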


> Given that, your definition does not provide much to discern between paradigms.

That's the whole point. "Reasonable" is not synonymous with "optimal", so we ought to expect that there would be multiple reasonable paradigms to choose between. Which programming paradigm you choose is very rarely the largest factor in the success or failure of your product.


Exactly.

You can solve any problem by throwing more money (millions of developers) at it. Is money, then, the largest factor in the success or failure of your product?

When something is constantly true for all objects in the scope of the domain, it is considered a tautology and should be dropped from the discussion, as it adds nothing.

If reasonableness is equally meaningless, we can drop it.

Instead, we can look at other properties of programming paradigms.


Reasonable in this context I'd understand as:

1. ThingDoerFactoryFacadeFactoryInstantiatorService: an abstraction trying to represent the reality of another abstraction that was itself meant to represent reality (an indirection that isolates the domain one level of abstraction deeper).

2. ThingCreator: an abstraction named in a way that is closer to direct reality (and that can perfectly hide its own inner complexities using composition; those could be as many as, or more than, in the ridiculous example in 1).

Note that people who, consciously or unconsciously, exclude the inverted pyramid principle fall into creating 1, while people who, consciously or unconsciously, embrace it tend to create 2 in "a reasonable" way.


“Because the people who created the hilariously Finnish ThingDoerFactoryFacadeFactoryInstantiatorService also thought they were doing a reasonably reasonable job.”

The people who did this will mess up FP or any other paradigm too. I think OOP was a huge step forward but like with everything in software design, you have to have good taste and recognize when something you do is going off the rails. There is no paradigm that will save you from clueless people or ideologues.


>The people who did this will mess up FP or any other paradigm too.

The people who did this will not mess up FP or any other paradigm.

I have an example of that somewhere in my career, with Haskell.

But it is "he said, she said" here, and it was not me who started it.

>you have to have good taste and recognize when something you do is going off the rails

The Liskov substitution principle is not precisely defined; see the "shapes" problem (class Rectangle : Square, or vice versa?). Thus, you have to have "good taste" or, to be more grounded in reality, you have to weigh your design choices against current use cases and possible future changes.

When you deal with constraints on a type, you don't need a hierarchy at all. If you add two values of some variable type in a function, give the function's type a Num constraint on the type of those two values. Then pass in anything that has Num implemented and be fine.
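Even in Java, a rough analogue of that Num constraint can be had by passing the constraint explicitly as an interface value, with no hierarchy involved (a sketch; Num and addTwo are made-up names):

    // The "constraint" is just an extra argument, much as a Haskell
    // compiler passes a Num dictionary behind the scenes.
    interface Num<T> {
        T plus(T a, T b);
    }

    class Adding {
        static <T> T addTwo(Num<T> num, T x, T y) {
            return num.plus(x, y);
        }

        public static void main(String[] args) {
            Num<Integer> intNum = (a, b) -> a + b;
            Num<Double> dblNum = (a, b) -> a + b;
            System.out.println(addTwo(intNum, 1, 2));     // 3
            System.out.println(addTwo(dblNum, 1.5, 2.5)); // 4.0
        }
    }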

Inheritance in a typical OOP program fixes the solutions of historical use cases in place and prevents finding more general solutions when new use cases arise. Solutions for new use cases most often get bolted on instead of being organically implanted.

Basically, it is OOP that provokes such concatenative names. When you step outside it, you find yourself in no need of them.


Who said that you have to use inheritance for shapes? I do know that it is an often-used example, but like Animal/Bird it has no real-world relevance.

I would even argue that inheritance is not the most important feature of OOP; objects are. Inheritance is useful (GUI nodes, for example, are famously a good fit for it), but object encapsulation is the essential feature; missing that would pretty much disallow large swathes of programs from existing.


"Classes are much harder to reason about than other type systems."

Strange, I find classes intuitive. Objects lend themselves to physical-system analogues, making them easier to reason about in complex systems. The paradigm specifically helps me build light mental models, which in turn helps me retain the associated codebase.

Anyone else have a similar experience?


> Anyone else have a similar experience?

Exactly the contrary. I find modelling in objects/classes counter-intuitive and thus hard to reason about.

When I paint a wall, I apply a function (paint) to the object (wall); the wall does not paint itself. I can apply the same function to a different type (aka class) of object, e.g. I can paint my chair. It stays the same function.

So in my cognitive model of the world, objects are objects manipulated by me, not subjects executing "methods"; but OOP treats them as subjects.


Ok, let’s represent a double-ended stack then, as a circular buffer. In the end it will be a fixed-size array with start and end variables, each pointing to an element of it.

Doesn’t it inherently make sense that I want a public API on top of it denoting how it can be used from the outside, and a private one that encapsulates the implementation details and makes them callable only by other methods of this object? This is the major point of OOP, in my opinion.
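Something like this minimal Java sketch (capacity and emptiness checks elided for brevity):

    // The array and indices are private implementation details;
    // only the public methods are visible from the outside.
    class Deque {
        private final int[] buf;
        private int start, end, size;

        Deque(int capacity) { buf = new int[capacity]; }

        public void pushFront(int x) {
            start = (start - 1 + buf.length) % buf.length;
            buf[start] = x;
            size++;
        }

        public void pushBack(int x) {
            buf[end] = x;
            end = (end + 1) % buf.length;
            size++;
        }

        public int popFront() {
            int x = buf[start];
            start = (start + 1) % buf.length;
            size--;
            return x;
        }

        public int size() { return size; }
    }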


To get an API I just define a type for double-ended stacks and functions manipulating it. Or, if you prefer to define it with immutable variables, the functions accept a double-ended stack and return a new, copied and modified stack as the result.

The set of functions will be a module resembling what you call an API.

I am not arguing this cannot be done in OOP, but I find it counter-intuitive and overly complex to do so.
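In Java (16+) terms that might look like the following sketch: an immutable value plus a "module" of free functions. Clunkier than in an FP language, and all names are hypothetical:

    import java.util.ArrayList;
    import java.util.List;

    // The deque is a plain immutable value...
    record Deq(List<Integer> items) {
        static Deq empty() { return new Deq(List.of()); }
    }

    // ...and the "API" is a module of functions returning new deques.
    class DeqOps {
        static Deq pushFront(Deq d, int x) {
            List<Integer> xs = new ArrayList<>();
            xs.add(x);
            xs.addAll(d.items());
            return new Deq(List.copyOf(xs));
        }

        static Deq pushBack(Deq d, int x) {
            List<Integer> xs = new ArrayList<>(d.items());
            xs.add(x);
            return new Deq(List.copyOf(xs));
        }

        static int front(Deq d) { return d.items().get(0); }
    }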


You can do the same thing with objects though.

I have "utilities" which can be used with objects. Per your example, if object(wall) and object(chair) both share the attribute(color) then they both can utilize function(paint). The only prerequisite would that they are paintable.


What you describe there is the use of interfaces and properties.

These things are much easier to reason about and to compose than classes with inheritance.

For example, it is easy to solve the Expression Problem [1] with interfaces and properties, and it is hard to solve it with only classes and inheritance.

[1] https://en.wikipedia.org/wiki/Expression_problem


"These things are much easier to reason about and to compose than classes with inheritance."

Fair. Personally, that looks like an argument for objects rather than against.

Do these concepts oppose or complement OOP? Is inheritance somehow a hidden requirement?


I did not say that it is not possible to do it. I said it is counter-intuitive for me to express or model it that way.

And as you can see in the discussion arising, even this simple example becomes a complex modelling problem in OOP.


There are countless articles showing how OOP fails to model the physical world.


Straw man.

The point of contention regards ease of reasoning, not completeness or the ability to model the physical world.


Define "ease of reasoning," then you can accuse others of "straw man" attack.

OOP does not even consider effects in it's reasoning. You can do anything with the state of an object and whatever that object can reach through references. The changes are not atomic, they are allowed to be (temporarily) inconsistent.

In my book, this is opposite of ease of reasoning.

You can change something that will shoot the user of your program much down the line and do that without any hiccup.


"Define "ease of reasoning," then you can accuse others of "straw man" attack."

Capable of being retained both as individual components and aggregate system.

My reply which the other user is replying to specifically quotes ease of reasoning from the parent comment. You're also conflating implementation with conceptualization.

I was only noting that OOP is easy for me. It fits my system and worldview. I did not generalize that experience to a larger population.


"Capable of being retained both..." is not a definition, it is a property, at best.


wtf?


I find classes extremely intuitive.


I find classes extremely unintuitive.

I find type classes and (generalized) algebraic data types to be extremely intuitive.


I find both intuitive and one is better for these kinds of jobs, the other for other kinds.


I have met a few people like you, and I wish you all well in your programming careers.


> Object oriented programming is a perfectly reasonable way to organize a code base and the basics provide for good encapsulation.

It should be understood as what it is: an organisational paradigm over procedural programming. The basic building block of OO programs is still the procedure with some local and global state. OO lets you bundle procedures together in groups, associate them with mutable state, and call them (statically or dynamically bound) via the static type system.
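In Java-ish terms, the desugaring is roughly this (a sketch of the idea, not of how any particular compiler spells it):

    // A method is (roughly) a procedure with an implicit parameter.
    class Counter {
        int n; // the bundled mutable state

        void increment() { n++; }                         // OO spelling: c.increment()

        static void increment(Counter self) { self.n++; } // procedural spelling: Counter.increment(c)
    }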


I mostly agree (and taming mutations is likely the reason for its widespread adoption), though there is no reason why it can’t be paired with FP, or at least restrict mutations to an object’s internal private state.


I implemented a module with objects the other day.

Five or ten years ago, in the heart of my object-oriented career, this would not have been noteworthy. For the last couple of years, however, I've managed to enter what almost feels like a post-OO phase of development, steeped in JavaScript (and TypeScript), which commonly flows between procedural and functional.

Long story short, on this particular assignment, objects made sense. They are short-lived, isolated in a module, and model the domain exceptionally well.

This feels like the right balance to me, compared to the decade of horrors I was complicit in, building "object-oriented codebases". OO should be one tool that is known, studied, and reached for only when needed.

I'd go so far as to say that the literature around OO might be more useful if it were written from that perspective. Perhaps too many Java consultants, thought-leaders, and shamans have led the conversation around OO since the '90s.


What people call Java OO was made mainstream via C++ UI frameworks; that is what consultants, thought-leaders, and shamans were selling before an Oak tree turned into Java.

The famous GoF is all about Smalltalk and C++.


I don't think I can agree.

OO feels like a tax, where the payment is more coupling and layers of indirection in the codebase (and this is the happy case, with good OO practices) but no benefits out the other side.

Stepping back, an important set of questions to ask when introducing any abstraction into a codebase is "What does it get me? What does it cost? Is what it gets me worth the cost?". OO implicitly says "objects are always worth the cost" and I just haven't found that to be true.

(Keeping the metaphor going: this is obviously not so great a tax that it's going to kill companies; hugely successful companies the world over pay it without issue. But it irks me!)


I'm not sure how this can be a criticism of OO. Classes are tuples with named members; methods are procedures with an implicit param. There is no OO tax. The benefits are obvious: clarity in data dependence, encapsulation, code reuse.

I don't see how OO causes coupling or indirection.


> methods are procedures with an implicit param.

Eh, not really? In single-dispatch, one of the arguments is often syntactically distinct from the others and is treated specially, but I wouldn't say that it's implicit. And with multiple dispatch object systems, all arguments are treated the same.


>more coupling and layers of indirection

actually, OO claims less coupling. as to layers of indirection, i don't think it claims anything.

> OO implicitly says "objects are always worth the cost"

don't know where you get this "always" from - objects are sometimes, maybe even often worth "the cost" (which is small), but not "always"


In Java "public final class Math extends Object". So yeah, some parts of OO land went "always".

I think that is way too far. As far as I am concerned, objects have their uses. Files in Haskell and the sequence abstraction in Clojure look like objects in everything but name to me.


But that's irrelevant. You can't instantiate that class, so what does it matter?

How should it be different?


of course it is too far, which is why other languages don't do it, though they use objects elsewhere


“OO implicitly says "objects are always worth the cost" and I just haven't found that to be true.”

It doesn’t always say that. That’s an interpretation only some ideologues have. Any paradigm will fall apart when overused. Software is still an art where you have to have taste to choose between several options.


Languages that shoehorn everything into objects, especially heavyweight objects (where it takes a class to create a simple function, a single variable, or a single constant, ...), are not just ideologies.

They are widely used tools, actively helpful or coercive in their enforcement of a particular view of coding.


A class with just data is not an object; it's a structure.

Inheritance and encapsulation are bad.

I fail to see any pro for OOP even if I use it every day.

OOP, "clean coding", design patterns and TDD morphed most code bases into monstrosities, hard to reason about, inneficient, hard to change and slow - even if they promised the opposite.

I find good old procedural programming and functional programming more fun.


While I submitted this article, because it raises some interesting points, I still do pretty much all of my programming in an object-oriented style. But for my next project I want to try the procedural approach.

With that said, I've always believed that just a single level of inheritance can be quite useful (e.g. an abstract base class and, below that, the concrete classes). However, adding more levels of concrete classes below this can get confusing. Perhaps programming languages should offer a way to restrict subclassing to a single level only.
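Java can already approximate that restriction: seal the abstract base (Java 17+) and mark the concrete level final, so the hierarchy cannot grow any deeper. A minimal sketch, with hypothetical names:

    // One abstract level; concrete leaves that cannot be subclassed further.
    abstract sealed class Shape permits Circle, Rect {}

    final class Circle extends Shape {
        final double r;
        Circle(double r) { this.r = r; }
    }

    final class Rect extends Shape {
        final double w, h;
        Rect(double w, double h) { this.w = w; this.h = h; }
    }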

It's kinda funny that when I started programming, so many years ago, I only did procedural programming (TI-BASIC on the TI-99/4A). But after programming object-oriented for so many years, it feels kinda hard to make the switch back to a procedural style.

I do find my Lua / LÖVE 2D programs become much harder to reason about as the codebase grows, and I am kinda hoping adopting a procedural style can fix this issue. Though I do know part of the issue is that Lua is not statically typed.


I've found this Jon Kalb talk immensely helpful. It's about applying best practices to OOP; it doesn't promote or discourage it, just describes how to minimize excess complexity if you're going to use it. (2020)

https://youtu.be/c0lutJECNUA

He also mentions three popular alternatives: 'object-based, static polymorphism, and functional' which is an interesting breakdown. I don't quite get what the author of this 2002 piece is saying here, however:

> "And as a result we find that object-oriented languages have succumbed to static thinkers who worship perfect planning over runtime adaptability, early decisions over late ones, and the wisdom of compilers over the cleverness of failure detection and repair."

Static polymorphism bad? OOP should be more on the dynamic side?


> "And as a result we find that object-oriented languages have succumbed to static thinkers who worship perfect planning over runtime adaptability, early decisions over late ones, and the wisdom of compilers over the cleverness of failure detection and repair."

The way I read this is that you first have to think out all the objects and the way they interact with each other up front, before starting the development process. I would imagine that with a procedural style this is much less of a concern. You just have data types (adding fields as needed) and implement each procedure as needed as well, one by one. There's no interaction between objects to think about, just manipulations of some data type.

But I also think his perspective is not 100% correct. Alan Kay said the main idea about objects is that they send messages to each other (or something like that). Objective-C (like Smalltalk) has objects that act this way, and this allows for very dynamic programming where decisions can still be made easily at runtime (e.g. exchanging method implementations at runtime or re-routing messages). I would imagine this is harder in many other languages that have objects, since they generally don't adopt the idea of sending messages. From Wikipedia [0]:

> One notable difference [with regards to C++] is that Objective-C provides runtime support for reflective features, whereas C++ adds only a small amount of runtime support to C. In Objective-C, an object can be queried about its own properties, e.g., whether it will respond to a certain message. In C++, this is not possible without the use of external libraries.

> The use of reflection is part of the wider distinction between dynamic (run-time) features and static (compile-time) features of a language. Although Objective-C and C++ each employ a mix of both features, Objective-C is decidedly geared toward run-time decisions while C++ is geared toward compile-time decisions. The tension between dynamic and static programming involves many of the classic trade-offs in programming: dynamic features add flexibility, static features add speed and type checking.

---

[0]: https://en.wikipedia.org/wiki/Objective-C
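For comparison, the nearest plain-Java analogue of asking an object at runtime whether it responds to a message goes through reflection. A sketch, with respondsTo as a hypothetical helper:

    class RespondsTo {
        // A rough analogue of Objective-C's respondsToSelector:.
        static boolean respondsTo(Object obj, String name, Class<?>... argTypes) {
            try {
                obj.getClass().getMethod(name, argTypes);
                return true;
            } catch (NoSuchMethodException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            System.out.println(respondsTo("hello", "length")); // true
            System.out.println(respondsTo("hello", "quack"));  // false
        }
    }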


not an fp vs oo thing but more general: in fp, time made proper shortcuts possible (automatic currying, composition abstractions) while oo has zero care for syntactic ergonomics. it's one big pain point imo

the other point is that the features of oo help neither modularity (it protects but doesn't help) nor logical mechanics. there should be a way to freely encode what is a valid object state and which transitions are allowed or not, without manually bolting a protocol onto every method (or requiring metaclasses / metaobjects)


> But with C++ and Java, the dynamic thinking fostered by object-oriented languages was nearly fatally assaulted by the theology of static thinking inherited from our mathematical heritage and the assumptions built into our views of computing by Charles Babbage whose factory-building worldview was dominated by omniscience and omnipotence.

>

> And as a result we find that object-oriented languages have succumbed to static thinkers who worship perfect planning over runtime adaptability, early decisions over late ones, and the wisdom of compilers over the cleverness of failure detection and repair.

And, there you have it. This is a big part of the reason that I never migrated from Objective-C to Swift. When I was programming in Objective-C, I always felt challenged by the idea that I had something to learn by embracing dynamic programming, and that it would be difficult, because the rest of the "OOP" world had forgotten all about it. What's left now is only a choice of straitjackets.

"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme LateBinding of all things." – Alan Kay

I wish OOP had explicitly forked, all those years ago, and the C++/Java stuff could have been called something else. That would have at least cleared things up, conceptually.


>"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme LateBinding of all things." – Alan Kay

This is probably why I love programming in Cuis, Pharo, and Squeak: the ability to just write my code in small chunks as I need it. Heck, the fact that block closures exist in the language spec makes it easy to be kind of FP-like with the code as needed, by passing along function blocks where they fit best and then reverting to OOP messaging where it's better suited.


Not to sound dismissive, but I'm curious how that's working for you? Are you still in the industry? Seems the vast majority of jobs that pay well and expect obj-c experience are in the context of "single digit percentage of our codebase is still in that and we just don't touch it."


I think what I wrote was misleading. When I wrote that I never migrated to Swift, what I meant was that I never bothered learning it. It broke my heart to have to work on iOS and throw away my Objective-C knowledge and trade (the UI stuff of) UIKit for SwiftUI, especially as all of that remained a moving target for a long time.

I just went and got a new job. I'm doing web front-end work with React and TypeScript. I'm not crazy about that, believe me.

I don't even bother with iOS stuff right now as a hobby, because I can't really stomach it. I really think Apple went the wrong way.

On a side note, I can't stand "reactive" UIs (though that's what I work with now). You know, key-value coding and notifications on iOS can be used to make a reactive UI (of sorts), instead of updating the UI imperatively. But all the books I've ever read from iOS people who knew what they were talking about always warned strongly against overusing KVC or notifications, specifically because it makes reasoning about how the code works more difficult.

So what, then, is the idea of a reactive UI? It's taking the kind of architecture or pattern programmers have always been warned not to use, and turning the entire UI into that!

I am really just down on the whole thing. I think it's a mistake.


Oh man, hard agree! I love being told that's all just a tooling problem. Okay, so when is that tooling coming then? I'm going along with the crowd but it is a struggle and I wish I could stay in UIKit land until I retire. Maybe things will come full-circle by then (or I'll just be telling SiriGPT what I want to show up on the screen)


I'd say the most ridiculous thing that has occurred in this space is something I'm personally dealing with in the workplace:

Governments (even well meaning, I mostly don't have a problem with this) sometimes get a say in promulgating teaching standards in higher ed, and now, not merely "programming," but "Object-oriented programming" is a required prerequisite in my college program. Sigh.


It should be noted that this document was one of the starting positions for a panel discussion at OOPSLA 2002: Resolved: Objects Have Failed.

http://www.oopsla.org/2002/fp/files/pan-1.html

http://www.oopsla.org/2002/fp/files/pan-9.html

The starting position of the "Objects have not failed" panelists is also on dreamsongs: https://www.dreamsongs.com/ObjectsHaveNotFailedNarr.html

And if you take both of these seriously, I think it becomes clear that the main failure of object-orientation is that it hasn't really been done yet. From Guy Steele's remarks:

  Procedural programming still has its place in the coding of methods.
This is true, but it is an understatement. And the crux of the problem.

The place that procedural programming has is absolutely central, and there is actually little else, because even object oriented programming is ~90% writing (attached) procedures, except we call them methods. So in a way OO wrote a check that it then didn't let us cash.

We should be able to construct structures of objects, with attributes and substructure, and then connect those objects to each other using various communication mechanisms. But we can't. Well, we can implement it, but we cannot express it.

All we can express is procedures and procedure invocations. So we can invoke procedures to create an object, and capture creating that object in a procedure. We can connect that object to another object using another procedure call, and the communication will also be mediated using procedure calls.

Where's the structure, the connecting, the communication? Hidden. Invisible in the program's source text. ("In the comments". Ha!) Just like loops and other control structures were hidden in the patterns of gotos that Dijkstra complained about.

So while OOP has delivered a lot it actually has failed to deliver on one of its core promises. Because it wasn't fully implemented.


To provide a bit of context for some of the younger readers: certain advocates of OO in the early '90s argued that it would be the end of ALL procedural code. At the time, I listened to a talk by a computing consultant who asserted that, by the year 2000, all programming would be objects with no methods. Methods were just a crutch for programmers who didn't understand proper OO methodology.

The same went for message passing, by the way. He believed that later versions of Smalltalk would eliminate message passing, as passing a message was just a procedure call with extra steps.

The programmer of the 21st century would merely define a class that inherits from other classes containing the desired behaviour. For example, at the base level, a language might have a "PrintH" class that prints the letter H where instantiated (with similar classes for other characters). You might then simply write your program:

    class Main : inherits PrintH, PrintELower, PrintLLower, PrintLLower, PrintOLower, PrintComma, PrintSpace, PrintW, PrintOLower, PrintRLower, PrintLLower, PrintDLower
Now, the above code is obviously tedious, but most programming would be done via higher level libraries. The important part was that a class was purely defined by what it inherits - the language would have no operation beyond inheritance. This was the only way to ensure that all code would be reusable.


Oh boy!

Good story. And of course he just reinvents a procedure with all this.

There are things that are procedural, they should probably be expressed as procedures.

The problem is that there are many things we want to express that are not really naturally procedural, yet we have to express them procedurally as well.

The second-order problem is that when there is another mechanism, people then want to use it as the "this can do everything" mechanism. Yes, you're clever, now sit down. I have seen Fibonacci expressed as dataflow more times than I care to remember. It's not a pretty sight.

But dataflow expressed procedurally is also not so cool.


None of that seems at all like the discussions I recollect from those decades.

Who were those "certain advocates"?


yes, well, that illustrates the quality of many "consultants"


We have those things; we call them spreadsheet cells.

A spreadsheet cell is an object in that it has state tied to some procedural code that computes that state, possibly in reference to other cells, possibly just the identity function (which is, indeed, a function). The point is, the vast majority of the procedural stuff, the state updates and the method invocations, is implicit, just as the gotos become implicit when you use while loops.
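A minimal Java sketch of such a cell (all names hypothetical): setting a cell's formula implicitly re-runs its dependents, the way a spreadsheet hides its update loop:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.IntSupplier;

    class Cell {
        private IntSupplier formula;
        private int value;
        private final List<Cell> dependents = new ArrayList<>();

        Cell(IntSupplier formula) { this.formula = formula; recompute(); }

        int value() { return value; }

        Cell dependsOn(Cell other) { other.dependents.add(this); return this; }

        void set(IntSupplier f) { formula = f; recompute(); }

        private void recompute() {
            value = formula.getAsInt();
            // the implicit procedural part: the update wave downstream
            for (Cell d : dependents) d.recompute();
        }
    }

    // Cell a = new Cell(() -> 2);
    // Cell b = new Cell(() -> a.value() + 1).dependsOn(a);
    // a.set(() -> 10); // b.value() is now 11, with no explicit call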

Using objects to represent real-world entities was always a somewhat loopy idea, and one that obscured the real power of OO as it is now: The ability to abstract over kinds of behavior using inheritance or mix-ins. "This type has the Ordinal behavior set" or "This function can accept any value of any type with the Printable behavior set" is a powerful leg-up for procedural programming and shouldn't be ignored, but it is, fundamentally, an improvement to procedural programming, and not a wholly new way to think about code.


NetKernel seemed to have gotten close to this; it's too bad it stayed commercial and now seems dead.

https://www.youtube.com/watch?v=1O8PwkXfDJg


A distinction without a difference; what is the difference between failure of the paradigm and failure to implement the paradigm? If it hasn’t been implemented after 4-5 decades then maybe the idea just isn’t workable.


"Heavier than air flying machines are impossible" -- Lord Kelvin, 1895


And if, in 1895, you said "flying machines have failed" that statement would be 100% accurate. And if you suggested that a business should be built on the failed flying machines, you'd be laughed out of the room.


Which is why, in 2023, I wrote:

"So while OOP has delivered a lot it actually has failed to deliver on one of its core promises."

And of course Lord Kelvin, in 1895, didn't say "Flying machines have failed"; he said "Flying machines are impossible". And you, in 2023:

> If it hasn’t been implemented after 4-5 decades then maybe the idea just isn’t workable.

Maybe. But also maybe not. First of all, it's not exactly as if OO hasn't achieved anything in 4-5 decades. Nothing could be further from the truth.

Just look around you! Or maybe let's see what Fred Brooks of "No Silver Bullet" fame had to say:

"A burst of papers came and said “Oh yes, there is a silver bullet, and here it is, this is mine!” When ‘96 came and I was writing the new edition of The Mythical Man-Month and did a chapter on that, there was not yet a 10 fold improvement in productivity in software engineering. I would question whether there has been any single technique that has done so since. If so, I would guess it is object oriented programming, as a matter of fact."

https://www.infoq.com/articles/No-Silver-Bullet-Summary/

So for Fred Brooks, the one thing in software engineering that came closest to an actual Silver Bullet, by his own definition, was, drumroll, object-oriented programming.

Pretty impressive for an "unworkable" idea.

Secondly, it has managed to achieve all of that while not even being fully developed yet! And I am not the only one saying that.

Alan’s famous quip “I made up the term ‘object-oriented’, and I can tell you I did not have C++ in mind” was followed immediately by the far less quoted “The important thing here is I have many of the same feelings about Smalltalk”.

https://www.youtube.com/watch?v=oKg1hTOQXoY&t=633s

That was 1997, and arguably we haven't really made progress in that area since then; we have even regressed. Sad.


this is a famous quote that illustrates that this very famous scientist must have been mentally impaired at the time. did he look around? birds? paper planes? thrown beermats? kites?


machines


well, a bird is a machine, as are we. a modern glider is a machine (and Lilienthal had flown prior to the quote).

you need to specify what you think or don't think is a machine. as kelvin should have done.


> bird

"The term is commonly applied to artificial devices,.."

It seems he trusted his audience to be smart enough to see that he wasn't deviating from common usage. And he was mostly right.

> Lilienthal

Unpowered.

"A machine is a physical system using power to apply forces and control movement to perform an action."

https://en.wikipedia.org/wiki/Machine

Cheers.


also from same paragraph:

"but also to natural biological macromolecules, such as molecular machines. Machines can be driven by animals and people, by natural forces such as wind and water, and by chemical, thermal, or electrical power"


commonly

Bye now.


> Hidden. Invisible in the program's source text

where else should it be?


Actually in the source text.

It currently isn't.


how so? please provide an example.


See: https://2020.programming-conference.org/details/salon-2020-p...

Very briefly: what we currently have is procedures that implement these things, but that's not the same thing.

Just like having a loop implemented by gotos is not the same thing as having a loop. (See Dijkstra)


Here we go again.

I will note that this is from 2002. But the fact that it is being recirculated now suggests that this is a timely topic.

I've been a programmer for 30 some odd years. 25 of those professionally. I started on BASIC, learned C as fast as I could because I wanted to program games and all the cool kids were writing games in C. Then OOP became hot and I learned C++ and Java and was introduced to OOP design patterns and threw myself into studying Gang of Four, PLOPD and other classic works on the subject.

Yes, I remember the "hot-ness" of OOP in the 90s. Next, Java, even the marketing forces that led to calling ECMAScript "Java"Script and shoe-horning objects into what was intended by Brendan Eich to be a functional language came out of this idea that OOP was the next big thing in software development.

But what I don't remember is anyone, at any point in my 25-year career, saying that OOP is the "only way."

In fact, I remember having a keen interest in LISP and FP as an aspiring novice in the 90s. I wanted to learn everything about languages, language theory and computer science that I could. People encouraged me to pursue that and never once said "stick with OOP and forget other paradigms."

But what I do see today are people being so trendy and fashionable that they ARE saying these things about Functional Programming. OOP is being thrown out with the bathwater HARD while people claim that it has "failed" and that everything should be Functional.

Both paradigms solve problems in different ways and have their own strengths, weaknesses and value judgments. OOP values custom types and encapsulated state, whereas FP favours primitives, immutability and statelessness. OOP sees conditional logic as the root of all complexity, whereas FP views state as the root of all complexity. Neither is right or wrong, and both points of view provide better tradeoffs in different situations.

In other words, do your job as an engineer and pick the right tool for the problem at hand. Stop pretending that we are going to achieve some utopian panacea where we finally resolve the Church / Turing hypothesis and can say, objectively and absolutely, that one approach is always better in every situation.


If your first language is Java, like mine was, then you're not even taught that there is anything _but_ OOP. It seems to me that OOP is foisted onto newbies who don't know anything else exists (why would they?) by teachers/instructors/programmers who love OOP.


the teachers probably know nothing else either.

having said that, i started out with procedural programming in basic, fortran, assembler and c, but when c++ came along i saw immediately that classes were a (note "a", not "the") very good way of organising code for larger projects, and i still think they are. but when i was exposed to java my very first thought was "but i don't want everything to be a class!"


one of the big problems with deciding what is the "best" approach is that there is no way of doing this objectively, scientifically or logistically. who is going to pay to have multiple teams implement a complex solution using multiple approaches? and even if someone were mad enough to do this, it would prove nothing, as the teams would be made up of different people.


The solution to this would be languages that make different paradigms easy to use, and easy to use together.

I.e. the problem you are pointing out is that, currently, using different paradigms together means bolting disparate languages and tools together, with all the accidental complexity, glue code and cross-tool knowledge that demands.

Which is a significant challenge, indeed.


well, no, i don't think that is what i am pointing out. and of course some languages (for example c++) are explicitly multiparadigm.


Ah, yes I see that. C++ and even C.

I think Lisp (aside from not being as low-level) is also fantastic for easily creating different coding paradigms, with the advantage that code is naturally directly manipulable.

But C or C++ can easily recreate Lisp-style computation as well.


"java trying to force its world view" is the correct way to design a programming language, just java is a bad language, and only half way forces its world view when it has to be 100% for this idea to work. in the sense that there shouldn't be things like int32 in a high level language.

also the OOP paradigm makes no sense and is just pseudoscience. i can't even think of a valid reason to use objects once you get past the cat/dog/animal example which only makes no sense because the proposed problem has no concrete requirements. there are no small examples where objects make sense, and for any big example it would require days of analysis to reach a conclusion. there are lots of invalid reasons, like some syntactic convenience of x.fff().zzz(), this example being invalid because that syntactic benefit can be better achieved by a composition operator like in bash: x | fff | zzz

if i look at any OOP codebase the first thing ill see is someone pointlessly loads up some data into an object and calls obj.method() which then just unloads the data as if they were just parameters to that function. why did you even make the object in the first place? because OOP braindamage, simple. you've been trained that this is somehow the ethically correct way to do things.

the idea that objects provide some new type of abstraction is just nonsense. those constructs were already doable in ML as well as by convention in C or any imperative language, which is how most such languages are in fact written. all OO adds is interfaces which are done better by many other languages, and inheritance which is just a joke.
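For what it's worth, the pipeline point carries over to any language with first-class functions; even in Java the same left-to-right reading comes from composing free functions rather than from methods on the object (a sketch):

    import java.util.function.Function;

    class Pipes {
        public static void main(String[] args) {
            Function<String, String> fff = String::trim;
            Function<String, Integer> zzz = String::length;

            // x | fff | zzz, spelled as composition of free functions
            Function<String, Integer> pipeline = fff.andThen(zzz);
            System.out.println(pipeline.apply("  hello  ")); // 5
        }
    }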


> if i look at any OOP codebase the first thing ill see is someone pointlessly loads up some data into an object and calls obj.method() which then just unloads the data as if they were just parameters to that function.

nope, never seen that.

it seems you have only ever looked at bad code. or maybe only ever written it?


> i can't even think of a valid reason to use objects once you get past the cat/dog/animal example

Because you have internalized the braindamage you claim others have. If you think "an object in code corresponds to an object in the real world" then you are a victim of the object=noun, method=verb braindamage.

An object is just a tuple. A method is just a procedure with an implicit param. OO is a way to factor code into pieces and to name and reuse those pieces. Inheritance is for Liskov substitution.


inheritance is NOT for Liskov substitution, and not even OOP fans will argue that, as most now consider inheritance a mistake. inheritance is specifically for making a subclass with the same methods while allowing a select few to be overridden. it's basically just a syntactic shortcut. you get the added bonus that it of course still satisfies the interface of the parent, which is the semantic part, but this is a vanishingly small benefit


Related ongoing thread:

Objects Have Not Failed (2002) - https://news.ycombinator.com/item?id=34996807 - March 2023 (3 comments)


Just slightly tangential, but the quality of writing and thought from this person, whom I've read quite a bit of at this point, just towers above the rest. Where did all the developers like this go? People who can talk seriously about fundamental things, almost totally unattached from the business world or the hype cycle. It's the kind of dev I imagined was everywhere before encountering reality. More a Stanley Cavell than a Steve Jobs. It's like a little portal into a world where the humanities are (correctly) taken seriously and actually used in a field that needs them most of all.

Why are there so few good, thoughtful writers like this? And why are the ones who do exist always Lisp guys?


My guess is that the older folks who have the experience to speak deeply about these topics aren't interested in maintaining a blog. One of my most interesting college professors worked on chess engines professionally. He has a small online presence, but his commentary is usually in response to pointed questions or wild claims about chess engines.

The guy has stories upon stories about both hardware and software design and their impact on performance at both high and low level. All of his tangents in class were downright fascinating.


My hunch is that they are older ('Lisp guys') and, as more devs come online, are a smaller minority with each passing year.


rpg is a gem. I highly recommend his book Patterns of Software, both for its philosophizing about software and as a memoir of his career.


(2002)

The worm is probably about to begin turning; the strict typing and bondage and discipline fads will fade and we'll have self-modifying code seen as a cool thing again. One might argue that the "API / SaaS" fad is the second coming of Objects; we just misinterpreted where the independent black boxes fit in the overall architecture.


> bondage and discipline

The borrow checker in the dungeon. Like it. Take these lashings of stack trace.

> we just misinterpreted where the independent black boxes fit

Yep. Some folks who worked at Sun may have come to that realization at this point. Processes as objects.


I'm interested in object orientation from the perspective that an object is a process: it has behaviours with other objects and internal mechanisms.

Similar to message passing. Unfortunately, I rarely see interaction diagrams of the compositional nature of objects, so it's not obvious how most software is meant to work without an example.

Am I alone in finding that Javadocs or Doxygen docs are not enough to understand an object-oriented product?

I think referring to "local understanding" of objects is a really good point. The more you can understand locally, the more you can actively include in your thinking.


I believe objects have failed because you want to compose types (data) and functions in a different way, so any form of encapsulation (not to be confused with modularity) is too limiting.


OOP is still a good model for systems where most objects are long-lived and local, mutation latency is low, and the number of objects is reasonable. Like GUIs and simulations. (Not coincidentally, those are the domains where OO was introduced.)

It's not a good fit for systems that respond to short-lived requests. If your objects' lifetimes are never longer than an HTTP request, you shouldn't be using OOP. It's not the right model! (I call this "kamikaze OOP": you build a complex piece of machinery in memory and crash it down the next instant.)

At the other end of the spectrum, if your real-time system deals with potentially huge numbers of objects, then OO isn't a good fit either, because it doesn't easily let you optimize memory layouts so that you can loop over all that data. Something like an ECS (entity component system) is probably a better fit.
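A sketch of that memory-layout difference (hypothetical names): the object-per-entity version scatters fields across the heap, while the ECS-style struct-of-arrays version keeps each component dense so the hot loop streams through contiguous memory.

    // Object-per-entity: each Particle is a separate heap allocation.
    class Particle {
        double x, y, vx, vy;
    }

    // ECS-ish struct-of-arrays: one dense array per component.
    class Particles {
        final double[] x, y, vx, vy;

        Particles(int n) {
            x = new double[n]; y = new double[n];
            vx = new double[n]; vy = new double[n];
        }

        void step(double dt) {
            for (int i = 0; i < x.length; i++) {
                x[i] += vx[i] * dt;
                y[i] += vy[i] * dt;
            }
        }
    }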


I fail to see what OOP, a paradigm, has to do with short-lived requests’ resource requirements or with ECS. There is no reason why both couldn’t be handled by an OOP programming language. These are just implementation details.


Sure, it's obvious when you look at the code. There are hardly any "objects" per se, but there are tons of classes, class definitions and boilerplate regarding classes.

What we generally call "object-oriented" is actually "class-oriented".


20 years later objects are alive and well though


This article gets reposted about every year, and every year it's demonstrated to be completely wrong :-)


Whatever Tesla's teams in sw/hw/fsd are doing appears to be state of the art at industry scale, improving year on year. Certainly more sophisticated than the F-35's sw/hw VTOL control.


Sarcasm?



