The Power of Interoperability: Why Objects Are Inevitable [pdf] (cmu.edu)
58 points by ingve 21 days ago | 24 comments



This is brilliant.

While there has unquestionably been some hype about objects over the years, I have too much respect for the many brilliant developers I have met in industry to believe they have been hoodwinked, for decades now, by a fad. The question therefore arises: might there be genuine advantages of object-oriented programming that could explain its success?

A big factor is that programmers tend to be idealists. They want code that's the absolute best: goodness(code) = max. OO is a way to take pretty terrible code and go from goodness(code) = X to X+1. But it doesn't guarantee getting further, and it might even make it harder to get to the next iteration. That's a terrible thing for any idealist but absolutely crucial in practice. Most code is "terrible", and most code embeds a lot of institutional knowledge that prevents it from being tossed away (see the tendency for disaster from "the big rewrite").

I would contrast this with the paradigm that's been "up and coming" for decades now - functional programming. FP aims to "start at the best and stay there". The problem is that even if we assume FP can do that, it's not aiming to solve the "increment the goodness" problem, which is what a lot of ordinary programming at ordinary companies has to involve.


I'm an obsessive idealist and I blame OO programming for my unfathomable (to most people) lack of productivity. With OO I always see, for instance, that code organization B is better than code organization A for very good reason. And C is better than B. But then A is better than C, also for good reason. After that nothing gets done.

I feel like I got more done writing assembly when I was 13 than I do today.


Every time you switch, remember what that implies about your weighting of what's important. If you force yourself to develop a consistent weighting of criteria, or even just to let your weighting of criteria change in a consistent direction, then you won't cycle back to previously discarded solutions.

So when you say that B is better than A, get specific. "If I choose B over A, I'm saying that testability and straightforward implementation are more important in the context of this project than concision. Is that true?" If it's not, keep A. If it is, remember your choice! Then when C tempts you, ask yourself: is switching to C consistent with the weighting of criteria that caused you to switch from A to B?

Perfect software doesn't exist. But you can try to best match your implementation to the particular concerns of the context it's written in.


Objective points I hated about OOP a la Java:

- mutability is always there; you're never sure something in the object graph won't come back to bite you

- no clear initialization: an object can be "constructed", but you never know what state you actually get, so you need to read every class's implicit protocol to be sure you called the right methods after `new` (FP asks for near in-order static trees... less cute, but quite obvious about dependencies). That is, unless people played fully with generics and classes, using phantom types to express which step of the object's state graph you're in (see the sketch after this list)

- too much bike shedding about private / public fields

- single dispatch forcing someone to own some logic when there's zero logical reason to do so
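
A minimal sketch of that phantom-type workaround, in C++ for concreteness (the Connection type is made up; template tags stand in for phantom types): calling methods out of the intended order fails to compile instead of failing at run time.

    #include <string>
    #include <type_traits>
    #include <utility>

    // Tag types record which step of the initialization protocol
    // this Connection has reached.
    struct Unopened {};
    struct Opened {};

    template <typename State>
    class Connection {
    public:
        explicit Connection(std::string host) : host_(std::move(host)) {}

        // Only an unopened connection may be opened; the result is a new
        // object whose type records that open() has already been called.
        Connection<Opened> open() const {
            static_assert(std::is_same_v<State, Unopened>, "already opened");
            return Connection<Opened>(host_);
        }

        // send() only compiles on a Connection<Opened>.
        void send(const std::string& msg) const {
            static_assert(std::is_same_v<State, Opened>, "call open() first");
            // ... write msg to the socket ...
        }

    private:
        std::string host_;
    };

    // Connection<Unopened>("db").open().send("hello") compiles;
    // Connection<Unopened>("db").send("hello") does not.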


> FP aims to "start at the best and stay there"

I'd say there's a difference between purely functional languages and functional concepts in general. Purely functional languages have indeed remained relatively unpopular (I'm not really sure why, but I'd guess it's a convex combination of "it's not how the machine works", "it's relatively esoteric unless you have studied math or cs" and "not very much backing/support"), but we've also seen functional concepts "contaminate" traditional OO languages.

Yes, I know, it's not really "functional" and diehard FP programmers will hate me, but I'd still say it's evidence that the functional paradigm can be (and in fact has been) used to "increment the goodness".


> Purely functional languages have indeed remained relatively unpopular (I'm not really sure why, but I'd guess it's a convex combination of "it's not how the machine works", "it's relatively esoteric unless you have studied math or cs" and "not very much backing/support")

My intuition on this:

Procedural/imperative thinking is more "natural" for most people. Ask someone with no formal training in any programming or modeling language (i.e., a non-programmer) to explain a process, and they'll give you a procedural/imperative description of what to do.

OOP languages are, mostly, procedural/imperative languages with extra constructs. To varying degrees these constructs are organizational as much as actually substantive additions to the execution model. For instance, classes in Java are modules for collecting related behavior and controlling access to (usually) private data fields. They can be seen as "rich" structs: data objects + procedures for altering the data.
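
A throwaway illustration of that "rich struct" view (C++ here, but the same reading applies to a Java class; the Counter name is made up):

    // A class seen as a "rich" struct: a data field plus the procedures
    // allowed to alter it, with access control on the field itself.
    class Counter {
    public:
        void increment() { ++count_; }        // procedure bundled with the data
        int value() const { return count_; }
    private:
        int count_ = 0;                       // (usually) private data field
    };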

Now, polymorphism actually adds a lot more and can be used to subsume other control flow mechanisms; I'm not saying that OOP is just procedural + organizational elements. But as a first pass, people can step into OOP by treating it as such and learn the rest as they continue.

FP languages, by contrast, do not start off as procedural + first class functions or expressive type systems. In particular, the pure FP languages drop the imperative notions (or obscure them). This creates a major hurdle for many would-be learners. Their intuition doesn't apply to these languages.

See also Prolog and relational programming for another area where people's natural tendency towards imperative thinking creates a major block on learning (or on using it to its potential).


The other thing about functional programming is that it involves a tradeoff very different from object orientation's. It makes functions into first-class (language) objects. Manipulating functions directly is a very powerful ability. As a tradeoff, making functions pure and variables immutable constrains that power and keeps your operations on functions understandable as well as powerful. Because if you take functions with side effects and manipulate them in various ways, you can quickly wind up with a powerful but incomprehensible system.

This means that "functional" pieces do sometimes work best as something like subsystems called by the main system.
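
A tiny sketch of that comprehensibility point (a generic compose in C++; the names are made up): composing pure functions is easy to reason about, while composing the same shapes with a hidden side effect already depends on call history.

    #include <iostream>

    int main() {
        // Treat functions as values: build a new function out of two others.
        auto compose = [](auto f, auto g) {
            return [=](auto x) { return f(g(x)); };
        };

        auto inc = [](int x) { return x + 1; };
        auto dbl = [](int x) { return x * 2; };

        auto inc_then_dbl = compose(dbl, inc);  // pure: result depends only on x
        std::cout << inc_then_dbl(3) << "\n";   // 8, every time

        int hidden = 0;
        auto leaky = [&](int x) { return x + hidden++; };
        auto murky = compose(dbl, leaky);       // result now depends on call history
        std::cout << murky(3) << "\n";          // 6 the first time
        std::cout << murky(3) << "\n";          // 8 the second time
    }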


IIUC, this article highlights the importance of dynamic dispatch, but implicitly restricts itself to single dispatch (OOP), completely eliding any mention of multiple dispatch.

Single dispatch is great/sufficient for modeling a process encapsulated by a "single agent", but multiple dispatch feels far more elegant for encoding interactions between multiple agents.

E.g., neither a.sum(b) nor b.sum(a) is as cleanly extensible as sum(a, b).
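
To make the extensibility point concrete (hypothetical Matrix/Vector types, with C++ overloading standing in for true multiple dispatch): the free-function form can be supplied by a third party without touching either class, whereas a.sum(b) has to live inside one of them.

    struct Vector { /* ... */ };
    struct Matrix { /* ... */ };

    // A third party can add this combination after the fact; with the
    // member form a.sum(b), Matrix (or Vector) would have to be edited
    // to learn about the other type.
    Vector sum(const Matrix& a, const Vector& b) {
        Vector result;
        // ... elementwise work using a and b elided ...
        return result;
    }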


Bringing special True Scotsman OOP systems into the paper would distract from its point. It would be confusing: is the paper saying that OOP is inevitable as it is, in its common state, or do we need to progress toward True Scotsman's OOP before OOP becomes completely unbeatable?

If regular old OOP beats everything, then True Scotsman OOP is just a footnote; oh, by the way, not only has single dispatch OOP eaten the world, but there are even better forms of OOP, which will lick the plate clean and ask for seconds. (But regular old OOP has been so successful that we have not had to massively adopt these.)

Now let's look at your sum(a, b). The paper talks specifically about service abstractions, and how OOP has specifically helped us manage them.

sum(a, b) is not a service abstraction. It's a functional abstraction! We add a and b, which could be many combinations of types (and so we need dispatch). But at the end, we will be returning a new object c, leaving a and b alone.

While multiple dispatch is all well and good, the paper's argument is, via anecdotes and hand-waving, that regular old OOP that your pointy-haired-boss is familiar with has been successful because it represents service abstractions well, in which services can be extended while remaining compatible.

"Services" is a code word for objects that change their state in response to requests: they receive messages, change state and send replies.

Speaking of actual services, multiple dispatch has a poor story in distributed systems. It's difficult to send a message to three different objects in different geographical locations and get that to look like a single method.

When you have components from different vendors, multiple dispatch has gone out the window. Multiple dispatch requires classes to be completely open so that methods can peek into the innards of several classes. How will that work with server RPC APIs from different vendors, COM objects from different vendors, you name it?

Multiple dispatch works well in large part because, at the bottom, a concrete piece of code gets dispatched, which has access to all the arguments and can do the work in one place, all within the same address space of one dynamic language image.

Single dispatch scales beyond process and machine boundaries. If you stick to a single dispatch design in the straw man prototype of a system, you will not have to redesign that basic aspect of the system to go to the wooden man that works across a network. Like oh shit, objects A and B of this double dispatch method are actually RPC proxies that live on different servers; now what?


The problem with OOP is that there are some problems that come up a lot (type promotion, for example) where there just is no good solution. You can write a program in Julia that defines 4 different types of numbers and adds them all together in about 30 lines of code (and which is extensible to more number types). That just doesn't work in any OOP language. You would have to choose the number types you're supporting ahead of time, and would have to write literally exponentially more code to make it work.


I understand; but, to reiterate, that is not a mainstream problem involving modeling a service.

And, by the way, that specific example is doable (somewhat) in languages with static dispatch on multiple arguments, like C++'s function overloading.

We're just stuck with a statically determined result type; C++ won't statically model something like "dividing two Integer objects yields an Integer when the division is exact, otherwise a Ratio". If we don't need that, we are good to go:

   // no OOP dispatch to see here
   quaternion mul(const quaternion &left, double right);
   quaternion mul(double left, const quaternion &right);
Static polymorphism may be good enough for a good many use cases, and runs fast too due to all dispatch being resolved at compile time.

Multiple dispatch will not save you from the fact that if you have N different kinds of numbers, you may have to code all N*N combinations of some operations that don't commute, or N(N+1)/2 for those that do. It will give you a minimal-boilerplate way to write that, though, with accurate modeling of situations where the result type depends on properties of a run-time value.


Any resources on multiple dispatch design? I'm doing some Common Lisp and I have to admit I'm walking in the fog a bit.


If you are interested in multiple dispatch, you really should check out Julia. In my opinion, the biggest reason Julia has been as successful as it has is that everything uses multiple dispatch, and the system for it is simple enough that you might not notice.


Forgot Julia used it, thanks.


IIRC Practical Common Lisp does a pretty good job with it.

https://www.amazon.com/Practical-Common-Lisp-Peter-Seibel/dp...


The main problem with objects is the proliferation of conceptual entities that increases complexity instead of reducing it. This was on full display with the creation of design patterns, which introduced a large number of concepts meant to simplify programming but that in fact sometimes multiplied the number of classes exponentially without improving understanding. Sometimes you'll have better code by just removing the crutch, but OO languages are designed to favor this kind of approach.


"design patterns" are mostly there to make up for the flaws of Java or C++. For instance, there is no need for factory classes when a language supports closures and passing functions as arguments to other functions.

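A rough sketch of that point in C++ (the Button and make_dialog names are made up): the "factory" collapses into a callable argument, and a lambda's captures carry the state a factory class would otherwise hold as fields.

    #include <functional>
    #include <iostream>
    #include <string>

    struct Button {
        std::string label;
    };

    // The "factory" is just a callable parameter; any lambda, function,
    // or function object will do.
    Button make_dialog(const std::function<Button(std::string)>& make_button) {
        // ... lay out the rest of the dialog ...
        return make_button("OK");
    }

    int main() {
        // The closure captures styling state that a factory class would
        // otherwise have to carry as fields.
        std::string theme = "dark";
        Button ok = make_dialog([&](std::string label) {
            return Button{theme + ": " + label};
        });
        std::cout << ok.label << "\n";  // prints "dark: OK"
    }
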
I say mostly, because something like the Observer pattern is still useful even outside OO. It's also a way to communicate an intent or a role a certain class has.

I think all the SOLID/DRY/Clean code concepts were more harmful to development in general than design patterns, because these ideals became an obsession.


> For instance, there is no need for factories classes when a language supports passing functions as arguments of other functions and closures.

C++ supports passing functions as arguments, and closures, yet factories are often necessary; how else are you creating new objects from dynamically loaded plug-ins?
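
For what it's worth, in the dynamic-loading case the "factory" usually boils down to a plain creation function exported by the plug-in. A minimal POSIX sketch (the Plugin interface and create_plugin symbol are made up; compile with -ldl, error handling and dlclose elided):

    #include <dlfcn.h>
    #include <memory>

    struct Plugin {
        virtual ~Plugin() = default;
        virtual void run() = 0;
    };

    using CreateFn = Plugin* (*)();

    std::unique_ptr<Plugin> load_plugin(const char* path) {
        void* handle = dlopen(path, RTLD_NOW);
        if (!handle) return nullptr;
        // The shared library must export: extern "C" Plugin* create_plugin();
        auto create = reinterpret_cast<CreateFn>(dlsym(handle, "create_plugin"));
        if (!create) return nullptr;
        return std::unique_ptr<Plugin>(create());
    }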


Nice title, but the article is a bit boring. I am an old programmer by now... and I agree that objects are inevitable, as are other paradigms too. Each tool has good and bad cases... Anyway, besides ADTs and other perspectives, objects are a nice way of compressing complex (messy) relations and interactions: giving them an identity and establishing a good-enough set of properties. Functions are another way, as are rules, constraints, etc. There is a concept called "swarm communication" that proposes a deeper idea in the same direction as objects: it is possible to give identities to swarms of objects interacting in complex distributed systems. This concept is somehow also inevitable, but I found it difficult to explain with words and much easier to explain to programmers by showing a bit of code ;)


Curious about those swarms. What would an example use be?


Examples:

1. Executable choreographies implemented as messages belonging to the same identity (a swarm of related messages).

2. Workflows (BPM kind of stuff, when you look at them as long-living processes). In this case the process dies and gets revived over time, all these instances belonging to a single concept.

3. Even smart contracts fall under this swarm approach, because the execution happens in many places but somehow it is still a single concept.

In a way, objects are just degenerate swarms...

The whole idea is quite simple but maybe a bit too abstract... We even proposed a simple primitive called "swarm" that can be used to program. I had an old project called swarmesb (not active anymore), but this perspective got me a lot more insights in another open source research project called privatesky... https://privatesky.xyz/overview/swarms-explained is an attempt I made to explain this way of seeing swarms.


It seems like object is to swarm as data item is to array in array languages.

However, there's a problem. It's easy enough to send a swarm a message. It's not so easy to make sense of the cacophony it sends back.

Many actions in the world are specific instances in terms of facts, despite being part of a larger category.

How would a swarm PayPal work? John sends 10 to Jane. That's one to one, not swarm to swarm.

If you look at how things are organized in nature, there's always a swarm synchronizer bringing the many to one.

It can be avoided in simple probabilistic systems, but it emerges back with complexity. I don't talk to a swarm of cells, for example; I address the single entity defined by your central nervous system.


I see the swarm in a different way: it is a set of objects and effects happening over time but still identifiable as coming from a single source. The synchronisation you are talking about could be required, but we can see the swarm as exploring a distributed system in time as well as in space, using the external world as a sort of synchronisation support. The entities of the swarm are like messages with a simple associated behaviour. Synchronisation is complex, and swarms may exploit it for a while, but it is not really a behaviour of the swarm itself... Look at my examples: in choreographies, the synchronisation becomes the propagation of some effects in a distributed system; in the business process case, the swarm happens over time, but at any specific time only a few swarm members are alive, and just persisting the instance is some sort of synchronisation; in the case of smart contracts, the swarm exists only at a logical level, as part of consensus... Maybe it is a bit strange to play this game, but I find it useful to imagine or even program complex events as swarms of little entities with short lives and fairly simple behaviours that compose into the complex behaviour of the swarm itself.


(2013)



