
OOP: History, and challenges for the next fifty years (2013) - SiempreViernes
https://www.sciencedirect.com/science/article/pii/S0890540113000795
======
foxes
What is the natural mathematical abstraction to describe object-oriented
programming? I always feel object-oriented languages are some sort of poor
approximation to sophisticated functional programming concepts (lenses,
comonads, etc.), or that they haven't realised what they are actually
describing. There are things that in practice feel very similar, and some
ideas that look dual (some sort of coalgebra?).

~~~
curryhoward
There is a recent paper called "Codata in action" [1] that I think gives a
nice explanation of (part of) OOP in terms of codata types.

But that paper also finally clarified for me why OOP is so awkward and
unnatural sometimes. Why should an entire program be composed only of codata
types? One would think data types (which are traditionally seen more in
functional languages than OOP languages) would be more natural for most
problems.

The specific combination of late binding and equi-recursive codata types that
is approximated by OOP only makes sense when you're solving a problem that is
naturally expressed in terms of those features, but in my mind the majority of
problems are not.

The other major problem I have with OOP is that it's not a straightforward
manifestation of an underlying mathematical calculus, but rather a hodgepodge
of (not universally agreed upon) programming ideas that are best applied on a
more à la carte basis. These ideas should be used when appropriate rather than
taken as an indivisible paradigm to be applied wholesale to every programming
problem. That's why I prefer to think of OOP as a design pattern rather than a
programming paradigm: useful in some situations, but not to be used for every
situation.

[1] [https://www.microsoft.com/en-us/research/uploads/prod/2020/0...](https://www.microsoft.com/en-us/research/uploads/prod/2020/01/CoDataInAction.pdf)

~~~
pron
> OOP only makes sense when you're solving a problem that is naturally
> expressed in terms of those features, but in my mind the majority of
> problems are not.

How do you reach the conclusion that "the majority of problems are not"? More
importantly, the right question is not about the majority of problems but
about the most common tricky parts of software, or, more precisely, about
reducing effort where it is largest; the majority of tasks does not account
for the majority of effort.

I am skeptical about _any_ programming paradigm making a big difference,
largely because the evidence suggests that none so far does, or at least that
there are diminishing returns. But even if some programming paradigm could
make a difference, I doubt FP would be it, and not only because empirically FP
so far hasn't. The reason is that FP "helps" when the task is easy to begin
with, and once you start dealing with interaction it "devolves" into classical
imperative programming. If any style could help, it would be one that
specifically helps with the hard parts, not the easy parts. An example of a
more radical approach would be synchronous programming, which at least aims to
make interaction/concurrency easier.

~~~
trumpeta
I think the benefit of FP is long-term: refactoring and maintaining systems
written by somebody who is no longer with the company. Two months into a
project you could be forgiven for thinking OOP and FP are equivalent; you
haven't yet gone through a major shift in understanding the requirements.

~~~
pron
I have been using both FP and OOP (and I started with FP before OOP) for a
quarter century already and have written extensively about the use of
mathematical formalisms in software
([https://pron.github.io/](https://pron.github.io/)). But the question isn't
about me, but about the industry as a whole. How many decades with no large
benefits do we need before we say, this isn't helping, we should look
elsewhere?

~~~
astrobe_
I am not sure that OOP "isn't helping". I am not exactly an OOP fanboy, but we
have to admit that, both inside and outside the industry (I'm thinking open
source and hobby projects), OOP has contributed something. The industry has a
lot of inertia, but it did eventually move from assembly to high-level
procedural languages, and from high-level procedural languages to OO
languages.

However, to what extent the move to OOP rather than to FP (or logic
programming), for instance, was accidental rather than rational is a question
one may ask. JavaScript is nothing special but is widely used just because it
benefited from its quasi-monopoly on web scripting.

I think the problem is _efficient, easy, correct: pick two_.

With "easy" meaning not only "easy to program with" but also "easy to learn"
and "easy to hire people".

Programming paradigms are false dichotomies. One can do pseudo-OOP in a
procedural language, imperative code in FP, or FP-style code in OOP, not to
mention the existence of multi-paradigm languages. Paradigms are merely about
which programming style a language makes easier, what is supported "out of the
box".
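As a rough illustration of that point (a Python sketch of my own, not anything
from the paper), the same stack can be written as pseudo-OOP without classes,
i.e. a record of closures over hidden state, or in an FP style as pure
functions over immutable values:

    # Pseudo-OOP in a "procedural" style: a closure hides the state and the
    # "object" is just a record (dict) of operations.
    def make_stack():
        items = []
        return {
            "push": lambda x: items.append(x),
            "pop": lambda: items.pop(),
        }

    s = make_stack()
    s["push"](1)
    s["push"](2)
    print(s["pop"]())      # 2

    # FP style inside an imperative/OOP language: pure functions over
    # immutable tuples; every "update" returns a new value.
    def push(stack, x):
        return stack + (x,)

    def pop(stack):
        return stack[:-1], stack[-1]

    rest, top = pop(push(push((), 1), 2))
    print(top)             # 2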

The problem is then for the user to pick the right main paradigm for the task
at hand. This is where programming becomes a craft - just like a proof
strategy in mathematics is not dictated by the formalism, but is chosen based
on intuitions (that is, expertise).

If there is no silver bullet, then maybe, from a programming language
perspective, one can have a language that lets us choose different pairs from
(efficient, easy, correct) at different times. It is well known that dynamic
scripting languages make prototyping easier but may not be viable in terms of
efficiency and correctness for a finished product (gradual typing tries to
reduce this gap, it seems). Automatic memory management (garbage collection)
and, more recently, languages with built-in concurrency features (instead of
direct manipulation of OS threads) make it easier to write less incorrect
programs at the expense of some efficiency.

If we admit there is no silver bullet, can we nevertheless build silver-bullet
factories?

~~~
pron
I agree with most of what you've said, except that programming paradigms and
languages have not helped with anything significantly beyond a certain point.
They helped early on (FORTRAN was more productive than assembly, FP/OOP helped
after C, etc.), but we're seeing diminishing returns. And guess what: some
predicted that this is _exactly_ what would happen; incidentally, the very
same person who said there's no silver bullet.

I'm not saying it's not possible to help, just that in a world of diminishing
returns finding stuff that helps is very, very hard. I think we're at a point
where we can say that FP and OOP are not "the answer" to getting us further
than where we already are, and we should start looking for other things.

> If we admit there is no silver bullet, can we nevertheless build
> silver-bullet factories?

I would say that the answer is no, for much the same reason that we can't
build halting-problem-deciding factories. If we had silver-bullet factories,
then unless picking the right bullet is itself very hard -- in which case
we've solved nothing -- we would have a silver bullet. There are some
fundamental reasons for that, and I tried to cover some of them here:
[https://pron.github.io/posts/correctness-and-complexity](https://pron.github.io/posts/correctness-and-complexity)

The one unanswered question is how far we are from the best we can do.

~~~
trumpeta
Wouldn't you say that the next logical step is to go up a level of
abstraction, then? Assembly -> C -> OOP/FP -> ??? In that case, wouldn't it
make more sense if that abstraction were built on top of something you can
mathematically reason about?

~~~
pron
First, you can mathematically reason about a program written in any language;
I do it all the time with TLA+ [1]. It is certainly not the case that FP is
more amenable to mathematical reasoning than imperative code, although the
kind of reasoning is different: FP mostly relies on equational reasoning while
imperative mostly relies on assertional reasoning. Synchronous programming,
for example, relies on temporal reasoning.
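To illustrate the difference (my example, hedged accordingly): equational
reasoning justifies replacing one expression by an equal one, whereas
assertional reasoning attaches invariants to mutable state and checks that
each step preserves them.

    # Equational reasoning (typical of FP): a rewrite justified by an equation
    # that holds for all inputs, e.g. map fusion:
    #   list(map(f, map(g, xs))) == list(map(lambda x: f(g(x)), xs))

    # Assertional reasoning (typical of imperative code): a loop invariant
    # over mutable state, checked here with a runtime assert.
    def sum_list(xs):
        total, i = 0, 0
        while i < len(xs):
            assert total == sum(xs[:i])   # invariant: total covers xs[:i]
            total += xs[i]
            i += 1
        return total                      # on exit i == len(xs), so total == sum(xs)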

Second, mathematical reasoning has its limits, especially as we're dealing
with intractable problems. It doesn't matter if you can somewhat more easily
reason about a simple three-line program in one paradigm than in another if
most important programs are six orders of magnitude bigger. Unfortunately,
program correctness (aside from some very simple properties) does not compose
tractably (i.e. even if reasoning about each of the components A1...An is
"easy", reasoning about the composition A1 ∘ ... ∘ An can be harder than any
computable function of n). So I would not put ease of mathematical reasoning
about simple cases as necessarily the main priority, let alone the only one.

[1]: [https://pron.github.io/tlaplus](https://pron.github.io/tlaplus)

------
Rochus
Interesting paper, thanks. Simula was without a doubt far ahead of its time,
though classes as modules are essentially implemented in Java. Concerning
Smalltalk and its features, we now have another very interesting paper by
Ingalls:
[https://dl.acm.org/doi/abs/10.1145/3386335](https://dl.acm.org/doi/abs/10.1145/3386335).

~~~
Rochus
For those interested: I'm working on a Simula 67 implementation; the parser
already works:
[https://github.com/rochus-keller/Simula](https://github.com/rochus-keller/Simula).

------
pjmlp
Very interesting paper and quite a good read for anyone whose OOP
understanding boils down to whatever Java/C#/C++ do.

------
sgt101
In terms of future ideas:

Objects for multicore: I just don't buy it. The problem is that application
concerns are orthogonal to implementation concerns, and the point of
programming languages is to abstract the implementation concerns away from the
application programmer. The hardware engineers and OS engineers should focus
on providing transparent access to system resources. Application concerns just
don't map onto resource consumption in my experience.

I found the articulation of mobility/cloud rather odd. I thought it was going
to be about Lambda (AWS), Cloud Run (GCP), and other serverless cloud
offerings (???), which encapsulate a collection of functionality and abstract
away the concerns of providing a callable interface and resources to run the
functionality. Instead it outlined a scheme for failover and fault tolerance,
which seems to rely on centralised naming and a global OS. I can't see that OO
has much to do with serverless, and I can't see that it is really implicated
in the failover story articulated here either. I would make similar comments
about the section on reliability.

What is really strange is that the future of OO doesn't seem to be focused
(according to this paper) on the concerns of maintainability, abstraction,
readability and reuse that I think made it an attractive line of development
in the past, which is perhaps why it hasn't persisted as "the way and the only
way" of software development into the current decade. I think the community
has dead-ended on these challenges and is now casting about for other stories
to move forward on, which is basically the death knell for this line of
programming languages. This is probably because everyone knows that the four
challenges I mention are extremely hard - even to define, more so to measure,
let alone solve - and those who take them on will be out-published by folks
working on virtually anything else. Also, sadly for computer science, these
are fuzzy, soft, human problems to do with people and organisations - and
computer scientists scorn those who do not deal in bundles of cut-and-dried
efficiency equations and derived bounds.

How to move forward? Well, I do have a plan. The way forward is to infiltrate
all funding bodies and then strangle all the maths grants that avoid the
rigour of pure-maths funding by pretending to be something to do with software
development or AI. The freed-up money can go to engineers solving problems
that matter, and the theory folks can go and justify their work to the funding
programs they should be justifying it to. Until that happens, I am afraid I
see no future for OO!

~~~
andrekandre
Isn't the actor model [0] by Hewitt more akin to the "OO" that Alan Kay was
talking about?

I'm pretty sure it's quite rigorously defined in mathematical terms... am I
perhaps missing something?

[0]
[https://en.wikipedia.org/wiki/Actor_model_and_process_calcul...](https://en.wikipedia.org/wiki/Actor_model_and_process_calculi)
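For what it's worth, the core of the idea fits in a few lines; here is a toy
Python sketch (mine, and nowhere near Hewitt's formal treatment): an actor
owns private state, processes one message at a time from a mailbox, and
interacts with the world only by receiving and sending messages.

    import queue
    import threading

    class CounterActor:
        """Toy actor: private state, a mailbox, one message handled at a time."""
        def __init__(self):
            self._count = 0
            self._mailbox = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):              # sending a message is the only interaction
            self._mailbox.put(msg)

        def _run(self):
            while True:
                kind, reply_to = self._mailbox.get()
                if kind == "incr":
                    self._count += 1
                elif kind == "get":
                    reply_to.put(self._count)   # reply with another message

    a = CounterActor()
    a.send(("incr", None))
    a.send(("incr", None))
    reply = queue.Queue()
    a.send(("get", reply))
    print(reply.get())   # 2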

------
MichaelZuo
“There may not be any programming a thousand years from now, but I'm willing
to wager that some form of Dahl's ideas will still be familiar to programmers
in fifty years, when Simula celebrates its centenary.”

I wonder if the former is at all possible: will programming transcend to some
higher level of sophistication that obviates the need for code, or for
language for that matter?

Or is there a way for code to attain the level of pure math, something
independent of any specific language?

~~~
Guthur
In my opinion, mainstream software will be abstracted away to the point where
you are working on defining and refining requirements, and it will likely look
very different.

Interestingly, I feel software construction might become more craftsman-like
in some areas, and in others a little more workmanlike.

To be open, I feel my vision is very far away, maybe 40-50 years out.

~~~
valenterry
I think it will be twofold.

On the one hand, there will be developers who use the highest-level languages
available and will do what you said: manage requirements, complexity and
ambiguity, while being able to find workarounds for the cases where
performance is not enough (there will always be such cases).

On the other hand, there will be people who specialize in building the
foundations for the former developers. They will write high-performance
algorithms and data structures and optimize code for devices where resources
are scarce.

~~~
PaulStatezny
Isn't this already true to some extent today?

~~~
valenterry
Yeah, I think it's already starting to become visible but the specialization
will probably become stronger and more explicit.

------
lioeters
I quite enjoyed the paper and the many insights into object-oriented
programming and its history.

Perhaps ironically, the part that I'm left wondering about is that
"inheritance can be explained using fixpoints of generators of higher-order
functions". Now I'm curious to study what that means, how inheritance can be
achieved by "explicit parameterisation" ¹.
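Very roughly, and simplifying Cook's treatment a lot (this is my Python
paraphrase, not the thesis's notation): a "generator" maps a self-reference to
a record of methods, an object is the fixpoint of its generator, and
inheritance builds a new generator out of the parent's before the fixpoint is
taken, which is where late binding comes from.

    def fix(gen):
        """Tie the knot: feed the finished object back into its own generator."""
        obj = {}
        obj.update(gen(obj))
        return obj

    def point_gen(self):                      # generator: self -> record of methods
        return {
            "x": lambda: 1,
            "describe": lambda: "point at %d" % self["x"](),
        }

    def loud_point_gen(self):                 # "subclass": reuse the parent generator,
        methods = dict(point_gen(self))       # then override before taking the fixpoint
        methods["x"] = lambda: 42
        return methods

    print(fix(point_gen)["describe"]())       # point at 1
    print(fix(loud_point_gen)["describe"]())  # point at 42: the inherited method
                                              # sees the override through self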

Another point that made me think is this prediction:

> In 1000 years object-oriented programming will no longer exist as we know
> it. This is either because our civilisation will have collapsed, or because
> it has not; in the latter case humans will no longer be engaged in any
> activity resembling programming, which will instead be something that
> computers do for themselves.

I wonder if it's true that in the far future, programming will be done by
computers. I suppose humans could be dictating in natural language, or moving
and shaping programs in virtual reality, but I'd still consider that
programming.

---

¹ The paper cited is: A Denotational Semantics of Inheritance -
[http://www.cs.utexas.edu/~wcook/papers/thesis/cook89.pdf](http://www.cs.utexas.edu/~wcook/papers/thesis/cook89.pdf)

~~~
meowface
I think predicting what things will be like in the next 200 years is near-
impossible. Maybe even the next 100. For the next 1000, I think the unknown
unknowns are so numerous that people alive then would look back and just laugh
at 99.9% of predictions, like we do over most of what people predicted in the
year 1020.

Humans might have merged so thoroughly with some form of technology that it
won't make sense to talk about humanity as a concept. I think natural
language/VR programming would be considered extremely primitive then; maybe
that's something we'll see over the next 100/200 years, but I don't think the
next 1000.

If humans still exist and remain as mostly independent biological entities
(perhaps due to a deliberate decision to minimize merging for
ethical/philosophical reasons, even if it may be feasible), I think AGIs will
be doing the vast majority of all programming. Maybe there'll be a bit of
near-instant "compilation" of human thoughts into a wide variety of
applications, enabled by neural interfaces, but I think they'll mostly be toys
or just things we make for our own entertainment/amusement. (This might even
happen in this century or the early 2100s; who knows.) I think it'll be like
toddlers fingerpainting compared to what the AGIs will be concocting and
running.

Maybe at best we'll sometimes be like middle managers, giving an AGI network
extremely high-level requirements and some visual/neural examples/diagrams
which they'll work with and adjust to deliver something close to what our true
intentions were. Or maybe it won't even make sense to think of them that way,
and instead we'll just consider them like extra brain lobes we automatically
offload difficult stuff to, with bandwidth equal to or greater than our
current intra-brain bandwidth - even though they won't be physically inside of
our brains in any way.

~~~
rhn_mk1
Or we might discover that AGIs are inherently conscious beings, and that
"offloading" to them is tantamount to slavery. With no one wanting to risk
competing with a superior species, humanity will be stuck with dumb computers.

Or humans might work around creating new conscious beings by building an AGI
substrate on top of already-living beings.

~~~
meowface
It remains unclear if high and/or general intelligence necessarily implies
consciousness. If that turns out to be the case, you'd be right, but we don't
know enough about these possibilities yet.

We will very possibly have conscious AGI, maybe even well before the first
superintelligent AGI is developed, but it may always remain a subset. And who
knows, maybe it'll be the opposite, and constructing something akin to general
superintelligence (by some subjective definition) will somehow turn out to be
much easier and will come much sooner than constructing the first conscious
entity.

We might come up with different terms to make the division very clear. Maybe
"abio people" or "digipeople" or something. They'll be a completely separate
category (kind of like animals and non-animals), and "AGI=ABP" will probably
be an important philosophical problem, someday.

My blind speculation is that they may be correlated in some ways, but are
mostly orthogonal. That is, perhaps you could have things that are conscious
but very unintelligent and ungeneralized (like some animals), as well as
things that are extremely intelligent and generalized but as conscious as an
inert chunk of rock in a canyon.

If true, in the latter case such a thing would essentially be non-conscious by
nearly all definitions. Or instead of a rock, perhaps it would be bounded by a
minimum level a bit beyond that, but only to the extent that stars or viruses
could also be said to be conscious. By the definitions where they are
conscious, it doesn't presently seem evident we should attribute moral value
to such entities. (Maybe our thinking will eventually change on that, but I
don't think it's extremely likely.)

If the minimum bound turns out to be closer to or above, say, an ant, then I
think it could get very slippery and we would have to effectively assume all
AGI-seeming entities are or may be conscious, and limit how we can interact
with them with laws and rights. Better to give the benefit of the doubt that
something is conscious than to unintentionally treat a conscious being as if
they aren't conscious. Though we can't currently seem to get that straight for
animals, so who knows how things might go for future conscious entities.

------
mikewarot
As a present-day Lazarus/Free Pascal programmer... I found this dive into
history interesting. I had an almost allergic reaction to the idea of not
type-checking everything at compile time, but I do see how in some cases it
can help if that is postponed.

What was REALLY interesting to me was the idea of objects as processes, and of
passing messages instead of making function calls. The machine I'm using to
type this has 8 cores, and I can see that thousands are on the way eventually.
Hanging on to programming models that only execute serially will become
prohibitively expensive in the near future.

The idea of immutable variables always struck me as oxymoronic... but if an
object is always resolved by id or name each time it is used, that avoids some
rather nasty deadlocks while still allowing recovery from failures.

It seems obvious to me that I'm going to have to give up the ability to know
exactly how variables are stored, in order to gain this flexibility. It seems
like it will be worth the trade off, as compute cost declines, and
communication costs remain relatively constant.

There is much in this to ponder, thanks for sharing.

~~~
dunefox
> The idea of immutable variables always struck me as oxymoronic

It is, they're called constants.

~~~
willtim
That's a bad choice of name from one or more old languages that misused
established mathematical terminology (e.g. C misuses void, constant and
variable). An immutable variable (or in maths, just "variable") has a value
that is unknown at build time but is constant at runtime. A constant has a
known value at build time.

What nearly every language calls a variable today is probably better described
as a "reference" or an "assignable", but sadly that ship has sailed and we'll
forever be inconsistent with the maths terminology for that one. "Immutable
variable" is therefore a useful synonym for "proper variable".
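In those terms, a small illustrative sketch (Python, where the distinction is
by convention rather than enforced by the language):

    import time

    PI = 3.14159            # a constant: its value is known before the program runs

    start = time.time()     # an "immutable variable" in the mathematical sense:
                            # unknown until runtime, but never reassigned afterwards

    count = 0
    count = count + 1       # what most languages call a variable: an assignable
                            # reference whose contents change over time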

~~~
mikewarot
Bad choice or no, the language is appropriate... Constants should remain
constant at runtime. Variables should be allowed to vary. I don't see why
people think immutable strings are a good idea, for example. How do you ever
build a text editor if the strings can't be edited?

~~~
willtim
The language and notation are inconsistent with hundreds of years of use in
maths, and confusing for kids learning both maths and programming (at least it
confused me).

Regarding your skepticism of immutable data structures, it's best to think of
immutable data and data structures as copy-on-write instead of
update-in-place. This is a technique that is currently very underused; for
example, most programming languages do not come with "persistent"
copy-on-write collections.

Here is an example text editor built using immutable data structures in C++:
[https://github.com/arximboldi/ewig](https://github.com/arximboldi/ewig)

Advantages are excellent support for concurrency (background save works while
you edit) and trivial support for undo/redo.
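A tiny Python sketch of the idea (mine; ewig itself is built on persistent
data structures from the author's immer library, which share structure far
more cleverly than this): every edit returns a new buffer value, the old one
stays valid, and undo is just keeping the old values around.

    # A buffer is an immutable tuple of lines; an "edit" builds a new buffer
    # that shares every unchanged line with the old one.
    def replace_line(buffer, i, new_line):
        return buffer[:i] + (new_line,) + buffer[i + 1:]

    history = [("hello", "world")]                        # undo stack of whole buffers
    history.append(replace_line(history[-1], 1, "there"))

    print(history[-1])   # ('hello', 'there')
    print(history[0])    # ('hello', 'world') -- the previous version is untouched,
                         # so undo is a pop, and a background save can read an old
                         # buffer while you keep editing a new one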

~~~
mikewarot
I tried diving into the source code, immediately ran into something new ("::"
everywhere), and started googling....

Namespaces?!? They look like a solution in search of a problem. The whole
point of object-oriented programming was to prevent reaching inside of objects
from the outside, thus making the implementation independent of its use...
Namespaces seem to break that, like leaving the cover off a machine so that
you can push on one of the relay contacts by hand.

C, C#, and C++ feel like Greek to me: all sorts of keywords that make no
sense, and random !@#$%^&*():"<>?[] everywhere.

Somewhere at the top of all that code, there has to be at least one actual
pointer variable where you keep track of the last revision, isn't there?

------
agumonkey
Has anybody ever seen a systemic interpretation of OO (or even of
modularization)?

A way to simply cut through possible spaces and divide things coarsely in
order to prototype without painting yourself into a corner, and also to
minimize the amount of change if the modules have to be changed (impact
analysis comes to mind)?

~~~
CyberDildonics
I don't think any interpretation of OOP comes close to being a full solution
to software architecture. That being said, I'll just copy a previous comment
that breaks down why I think 'OOP' gets debated in a circle without any
resolution:

"OOP" is really three different things that people conflate together.

1. Making data structures into tiny databases with an interface that
simplifies storing and retrieving data correctly (ideally with no
transformations to other data types in the class itself, so as to minimize
dependencies; see the sketch after this list).

2. The school-taught Java/old-C++ debacle of inheritance. This is where things
fall apart. Inheritance means not only dependencies but opaque dependencies.
It almost always means pointer chasing and lots of allocations. It used to be
necessary because it was the only way to get generic containers, and the speed
hit from chasing a chain of pointers used to be much smaller. In modern times
this destroys performance and is not needed thanks to templates.

3. Message passing - 'at a certain scale, everything becomes a network
problem'. I think the flexibility of having this integrated into the core of a
language like Smalltalk is not a good trade-off, but I do think message
passing at a coarser architectural level offers large benefits without
limiting the speed of more granular and fundamental data structures and
functions.

I think OOP will always be debated in a circle because people conflate these
three aspects. The first I think is extremely useful and vital. The second I
think is obsolete. The third I think should be treated as a higher-level
architectural approach, with large chunks of data passed around between large
non-trivial modules/libraries/nodes/plugins etc. The higher level can be more
dynamic and flexible because the overhead of a flexible structure is much
smaller if it is done with large chunks of data, and not integrated into a
language and used for vectors / hash maps / one-line math functions, etc.

~~~
agumonkey
Honestly, the third point is cute, but message passing seems like level zero
of a networking approach for large-scale systems. I wish I could find more
thorough models.

~~~
CyberDildonics
I'm not sure what you mean exactly. Once you have a system of software running
over multiple computers on a network you will have to copy data to other
computers and treat everything as not just an asynchronous node, but a
somewhat unreliable node as well.

I think this is a given, but I don't think software architecture is very well
solidified on how to scale large programs even within the same process.

~~~
agumonkey
I mean that at the software level I never saw a design taking into account the
flow of messages across the whole 'network', capacity, errors, redundancy, or
measurement/optimization. At best, the message passing was a decoupling
analogy.

------
squid_demon
> In 1000 years object-oriented programming will no longer exist as we know
> it.

Hopefully more like 5 years or less. OOP is worthless for programming
computers and needs to step aside.

