
Alan Kay and OO Programming - signa11
https://ovid.github.io/articles/alan-kay-and-oo-programming.html
======
cryptica
>> Extreme late-binding is important because Kay argues that it permits you to
not commit too early to the "one true way" of solving an issue (and thus makes
it easier to change those decisions), but can also allow you to build systems
that you can change while they are still running!

>> Binding can also refer to binding a variable type to data.

As someone who has over 15 years of experience going back and forth between
statically typed and dynamically typed programming languages (but who has
settled on dynamically typed languages in the last 4 years), this statement
resonates very strongly with me. Also, it ties in perfectly with my philosophy
of focusing on integration tests instead of unit tests; the idea that you
should lock down the features of your system but keep the flexibility to move
around all of the internal implementation is critical.

It's a shame to see the new generation of developers moving back to statically
typed languages (e.g. TypeScript) instead of actually trying to understand how
to use dynamically typed languages properly. The obsession with achieving
100% unit test coverage is equally shameful.

Many of the people who came up with or promoted the idea of dynamically typed
languages had decades of experience working with punchcards, assembly code and
statically typed languages; they were onto something. Why does the new
generation so easily discard this vast amount of wisdom?

~~~
jompe
I'm a fresh graduate from uni so I'd say that I don't have that much
experience. My last years working on the side of studies using Python really
made me prefer strongly typed languages like Haskell, Rust, Elm etc. My
experience is that the compiler almost always finds my small errors and would-
be-bugs which Python exposes at runtime (crashes with e.g. None-type errors).

What would you say the benefits are with dynamic typing?

~~~
dec0dedab0de
Python _IS_ strongly typed, i.e. 1 == '1' is False, and 2 + '2' raises an
exception.
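For the curious, that distinction can be demonstrated in a couple of lines of Python:

```python
# Python is dynamically typed (types are checked at runtime) but strongly
# typed: it refuses to silently coerce between unrelated types.

print(1 == '1')        # prints False: int and str never compare equal

try:
    result = 2 + '2'   # no implicit coercion: this raises TypeError
except TypeError as exc:
    print(type(exc).__name__)  # prints TypeError
```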

~~~
AnimalMuppet
So forgive jompe for using the wrong term. The point, however, was clear. It
wasn't "strong vs weak"; the point clearly was "compile time vs run time".
And, well, Python is, from jompe's perspective, on the wrong side of the line,
no matter how "strong" Python's type system is.

~~~
dec0dedab0de
There is nothing to forgive, many people conflate dynamic typing with weak
typing, so I thought I would point it out.

Whether or not a specific method of typing is good or bad depends on the
project and the developer(s) working on it. But if you're going to use that as
a reason to make decisions, then you should understand the difference.

------
kstenerud
I'm not sure I understand why late binding and Alan Kay style messaging is
desirable.

If you send a message to something that's supposed to do a task, and it just
ignores your message because it was the wrong type or whatever, how are you
supposed to debug that, or reason about the correctness of your program at
all?

~~~
mirsadm
In my experience silent failure leads to some of the most difficult-to-debug
systems, which end up in a not-quite-right state. Fail as early as possible
has always worked better for me; bugs are found much earlier.

~~~
mpweiher
Things are different in-the-small vs. in-the-large.

You can't statically typecheck the internet. Oh, and no-one said the failure
needs to be silent.

~~~
dwd
Totally agree: failure should be announced so that another object can observe
it and decide to do something about it.

I didn't feel the author got the messaging part. 'You send some data to an
object' kind of misses the point.

Going back to Kay's inspiration in cellular function, the messaging is more of
a broadcast, though proximity matters as far as which other cell responds
(Internet routing tables are a good approximation, but you need something more
dynamic to allow objects to be created and destroyed).

------
warent
I have a ton of respect for Alan Kay and think he's a genius. But why does it
seem like every time he talks about OO it's always painting an apocalyptic
picture like we're in some kind of twilight zone alternate nightmare reality
of broken patterns and models? Surely our concept of objects and OOD can't be
that bad, but his apparently contrary outlook is just so persistent...

~~~
LandR
> I have a ton of respect for Alan Kay and think he's a genius. But why does
> it seem like every time he talks about OO it's always painting an
> apocalyptic picture like we're in some kind of twilight zone alternate
> nightmare reality of broken patterns and models?

Because we are? Have you seen most OOP codebases? They are a train wreck!

~~~
Pop2019
JavaScript is functional and oh boy!

~~~
pingucrimson
What does "functional" mean to you? If "OO" has a hundred different meanings,
then "functional" must have at least a thousand. To me, JavaScript is
definitely not functional - it lacks a notion of purity, and its meaning is
the evaluation of statements, not the construction of expressions.

~~~
Pop2019
My point was popular languages attract masses of developers and the resulting
quality of code is poor on average.

~~~
pingucrimson
That's a good point, but it's not implied by "JavaScript is functional and oh
boy!" at all.

------
mark_l_watson
Language features: it depends on what you need.

I am a huge fan of Lisp (started using it professionally in 1982) and to a
lesser degree Smalltalk (I wrote a nifty little NLP library for Pharo). That said,
sometimes I really like the strict typing of Haskell, which I am using right
now to develop a commercial product.

Smalltalk, especially with the increasingly good modern Pharo system, is a
great platform for some applications, especially for getting close to data and
for flexibility in trying out UI ideas.

In any case, Alan Kay has always been an inspiration to me.

------
GorgeRonde
What is late binding of everything?

To me it's a bit like doing the opposite of what macros in Lisp do, but to do
it during run-time rather than compile-time.

Take Ruby for instance (which is interpreted in its standard implementation).
You can define methods at runtime, and the 'keywords' that allow you to define
new methods are themselves methods.

As a result you can extend the more "static" part of the language, adding new
ways to declare methods and other "quasi-syntactic" sugars, because this static
part has been moved close to run-time and is executed like any other part of
the code. There are still a few things you can't do, like building a new way to
declare classes or changing the superclass they point to.
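Python offers the same kind of runtime definition; here's a minimal sketch (the class and method names are made up for illustration):

```python
class Greeter:
    pass  # starts with no methods at all

# Define a plain function, then attach it to the class at runtime.
def hello(self, name):
    return f"hello, {name}"

setattr(Greeter, "hello", hello)

# Instances pick up the new method immediately, because method lookup
# ("binding") happens at call time, not at class-definition time.
print(Greeter().hello("world"))  # prints hello, world
```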

Edit: this interpretation is driven by the notion of PHP's Late Static
Bindings[1].

[1] [https://php.net/manual/en/language.oop5.late-static-bindings.php](https://php.net/manual/en/language.oop5.late-static-bindings.php)

~~~
armitron
Rather than seeing "late binding of everything" as a specific implementation
or bits and pieces of a particular implementation, I find it much more
clear to see it as a general, self-referential idea. Why then would one _go
against it and try to constrain it_?

Taken in that light, as the article does a really good job of explaining, late
binding of everything is a modeling process (or a process of thinking about
things). It means that I should try and keep parts of the structure that I'm
building up as flexible as they can be so that they match the model in my
mind. One benefit of doing that is that I can later revisit and rework them.
Another benefit that is seldom talked about, is that _the object I'm working
on no longer constrains my thinking_.

In other words, only specify in detail that which is crystal-clear in my
mental model, and get away with fuzzier representations elsewhere. Tools and
languages that fall in line with that process not only empower one to work in
this way but make it _ultra-efficient_ by enabling short feedback loops and
allowing the programmer to mesh with the modeled object.

------
peterwwillis
Some interesting points:

- One cell has protein molecules with 5,000 atoms each; 30% of a cell is 120
million components that transmit information; about 100GB of state.

- The internet is the only successful OO program.

- Your program should be able to change its code as it runs.

My own interpretation of the "messaging" paradigm is this: you send 100 people
a letter with some random symbols in it and say, "I would like a pony." Over
time you will get letters back, and eventually you will get a response which
is basically what you were looking for. Then you send more letters.

Also, based on the bugs in the Erlang code I've seen, I'd say it should stop
being used as some kind of holy savior for the unscalable mess of OO code out
there. I don't find it any better than C code.

------
cryptos
What Alan Kay describes reminds me of actor systems like Akka. Each actor is
like a "cell" isolated from other "cells". An actor receives messages of
arbitrary types ("messaging" and "late binding") and can do what it wants,
whenever it wants. Actors can form a hierarchy to handle errors. This again is
much like how cells handle injuries.

Interestingly, the Akka community has put a lot of effort into implementing
statically typed actors. But as far as I know (I'm not quite up to date) there
is no final solution to it, and it is controversial whether it is even the
right thing to do.

------
jasode
_> , he realized that while software routinely has trouble scaling, cells can
easily coordinate and scale by a factor of over a trillion, creating some of
the most fantastically complex things in existence, capable of correcting
their own errors. By comparison, the most sophisticated computer software
programs are slow, tiny, bugfests. Kay's conception of OOP starts with a
single question: how can we get our software to match this scalability?_

Here's my pet theory on why Kay's vision for OOP didn't win in the marketplace
of ideas: The software industry achieves _"extreme late-binding"_ via
_network protocols_ like TCP/IP and HTTP instead of a single programming
language's "message bus".

Instead of using the "message bus" of Smalltalk or Objective-C's
"objc_msgSend()", the world has decided to express the evolution of software
via _multiple programming languages and runtimes_ and by the mechanism of
_software updates_ instead of depending on a single language ecosystem like
Smalltalk to write a metaphoric cell biology system to evolve itself.

The majority of software we write isn't burned onto a printed circuit board
and never ever updated again. An extreme example of an algorithm that's
forever engraved in hardware is the computer on the Voyager space probe.[0]
Instead of launching a computer out to space and never touching it again, we
have the luxury of just updating the software.

If anyone remembers the 1980s online services like Prodigy and Compuserve,
they would have _scheduled maintenance outages_. E.g. they'd send out a
notification that "Prodigy will be down for service from Saturday midnight to
Sunday at 4am".

But consider today's massive web properties like Amazon.com, Facebook, Google.
They run _virtually 24/7_ with no scheduled maintenance downtimes. How do
they do that even though we know they're constantly deploying new software,
microservices, etc -- and -- they're not using an extreme-late-binding
programming language like Smalltalk? Because they achieve that dynamism at the
http network layer instead of the programming language, with devops practices
such as _"continuous deployment"_.

E.g. The url can be thought of as a "request message" in Kay's parlance.
Here's a url that uses Google Translate to convert Russian text to English:

[https://translate.google.com/?hl=en&tab=wT0#view=home&op=tra...](https://translate.google.com/?hl=en&tab=wT0#view=home&op=translate&sl=auto&tl=en&text=%D1%81%D1%87%D0%B0%D1%81%D1%82%D0%BB%D0%B8%D0%B2%D1%8B%D0%B5%20%D1%81%D0%B5%D0%BC%D1%8C%D0%B8%20%D0%B2%D1%81%D0%B5%20%D0%BF%D0%BE%D1%85%D0%BE%D0%B6%D0%B8)

Before September 2016, the Google backend software responding to that url
was linear algebra on a corpus of text. After that, they completely switched
out the translation engine for a deep-learning neural network. The http
layer allowed Google to transparently change out an entire backend stack
without users being aware of it. There was no publicized "maintenance outage"
to swap out the entire language translation engine. Presumably in the future,
that same url ("message") will _extremely late bind_ to a different and better
translation engine ("receiver object").

Today, we also have constant _software auto-updates_ on the desktop and
smartphone. How do Chrome and Firefox evolve new features even though they're
written in static C++ instead of dynamic Smalltalk? Because the browsers
_auto-download the updates_ and install them.

[0]
[https://www.allaboutcircuits.com/uploads/articles/voyager_fds_nasa.jpg](https://www.allaboutcircuits.com/uploads/articles/voyager_fds_nasa.jpg)

~~~
carlmr
>Instead of using the "message bus" of Smalltalk or Objective-C's
"objc_msgSend()", the world has decided to express the evolution of software
via multiple programming languages and runtimes and by the mechanism of
software updates instead of depending on a single language ecosystem like
Smalltalk to write a metaphoric cell biology system to evolve itself.

I find your post very insightful. I'd just like to add that you have that kind
of message passing in Elixir and Erlang, and they also support hot-swapping
code. So the idea didn't completely lose in the marketplace of ideas. It did
lose the war for the name OOP, though.

I'm not a fan of what we understand as OOP today either; just looking at the
GoF book should make you wonder whether you want to work in a model that
makes all of this necessary. So I kind of understand Alan Kay. I just think
that calling Elixir an actor-based functional language is just as good to me.

~~~
senderista
The GoF book was written for Smalltalk and C++, so the limitations that compel
the invention of design patterns clearly apply to Kay’s own language as well.

------
ngcc_hk
Objective-C is better in this respect. It is more about message passing than
type enforcement.

~~~
jrochkind1
Ruby even more so. (Both Ruby and ObjC were originally heavily influenced by
Smalltalk.)

------
pier25
Great article. The idea that software should learn from billions of years of
biological evolution is super interesting.

------
barberousse
>>In other words, you don't execute code by calling it by name: you send some
data (a message) to an object and it figures out which code, if any, to
execute in response. In fact, this can improve your isolation because the
receiver is free to ignore any messages it doesn't understand. It's a paradigm
most are not familiar with, but it's powerful.

A Redux reducer consuming an Action object would possess the same properties,
no?
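In spirit, yes: both dispatch on a message rather than calling code by name. Here's a toy Python sketch of a receiver that is free to ignore messages it doesn't understand (the message shapes are invented for illustration):

```python
class Counter:
    """Interprets the messages it knows and silently ignores the rest."""

    def __init__(self):
        self.value = 0

    def receive(self, message):
        kind = message.get("type")
        if kind == "increment":
            self.value += message.get("by", 1)
        elif kind == "reset":
            self.value = 0
        # Any other message type is ignored: the receiver, not the
        # sender, decides what a message means.

c = Counter()
for msg in [{"type": "increment"},
            {"type": "increment", "by": 4},
            {"type": "rotate"}]:     # unknown message, harmlessly dropped
    c.receive(msg)
print(c.value)  # prints 5
```

This is essentially the shape of a Redux reducer too: a pure dispatch on the action's type, with unknown actions passing through unchanged.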

------
betandr
The term object-oriented has often been misinterpreted. Alan Kay explained
that "[the term object-oriented] was a bad choice because it under-emphasized
the more important idea of message sending".

------
typon
What is a modern language that comes close to embodying these ideals?

~~~
arnsholt
Smalltalk comes to mind. You might argue over whether it counts as modern or
not, but working in Smalltalk for two years was something of a revelation to
me, at least.

------
xaduha
Between elegant and 'industry-friendly' languages, the latter always wins.
Often the shortcomings of the language are compensated for by the tools
around it.

------
namelosw
I always wonder what exactly he meant. I'm not quite familiar with Smalltalk
and CLOS, though I have tried them. But working with Erlang for a while has
made me think I'm slowly getting what he said. I find these ideas really
fascinating.

Try to think about the following pieces:

1. If a bee dies, the hive won't explode. Millions of cells die every minute
within you.

In Erlang, processes (objects) are usually organized in a bureaucratic
structure, with supervisors letting other processes do the actual work. If a
worker dies or fails, the supervisor can simply kill and replace it. This
really looks like biological systems or human society. It is not a world
where the boss lets someone do some work, and then the boss, the worker, and
the whole world explode if the worker fails (exception handling in 99% of OOP
languages is not OOPish at all; it's totally imperative instead of modeling
the relation between objects).

These correspond to the biological metaphor in the talk from Alan Kay in OP's
article.
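The supervision idea can be sketched in a few lines of Python (no real Erlang processes here, just the restart policy; all the names are invented):

```python
def supervise(make_worker, retries=3):
    """Run a worker; if it dies, replace it with a fresh one instead of
    letting the failure take down the caller."""
    last_error = None
    for _ in range(retries):
        worker = make_worker()   # "kill and replace": a fresh worker each time
        try:
            return worker()
        except Exception as exc:
            last_error = exc     # worker died; the supervisor absorbs the crash
    raise RuntimeError("worker kept failing") from last_error

attempts = {"count": 0}

def make_flaky_worker():
    def worker():
        attempts["count"] += 1
        if attempts["count"] < 3:
            raise ValueError("worker died")
        return "done"
    return worker

print(supervise(make_flaky_worker))  # prints done
```

The caller never sees the first two crashes; failure is contained and handled one level up, which is the supervision-tree idea in miniature.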

2. You can find and talk to someone alive if you know their address, email or
phone number.

Java has object references, but they are not transparent to other systems;
there is always an 'outside world' concept in this kind of reference or
pointer system, similar to the ST monad in Haskell. In a normalized database
there are transparent addresses for entity records, but the records are dead
and cannot talk to anyone: every time you need one, you sort of revive it,
interact with it for a short period, then kill it and put its guts back into
the database.

In Erlang, you can register any process (object) with an address like
{user,42}, and any other object can talk to it if it knows the address, even
from other servers. Just like how URLs work, as mentioned in Alan Kay's talk.
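A minimal single-machine Python sketch of that addressing scheme (no distribution or concurrency; the names are illustrative):

```python
# A registry mapping addresses to live objects, like Erlang's registered
# processes: anyone who knows the address can send it a message.
registry = {}

class Process:
    def __init__(self):
        self.mailbox = []

    def receive(self, msg):
        self.mailbox.append(msg)

def register(address, process):
    registry[address] = process

def send(address, msg):
    # The sender only needs the address, not a direct object reference.
    registry[address].receive(msg)

user42 = Process()
register(("user", 42), user42)
send(("user", 42), {"type": "greet", "text": "hello"})
print(len(user42.mailbox))  # prints 1
```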

3. The world is concurrent.

You have approximately a thousand audience members in a room. In order to
count them, you have them all stand up. You tell everyone to keep the number
'1' in their mind. Then everyone finds another person and they add their
numbers together; one person sits down, the other remains standing and takes
the former's number; then the process repeats. The last person standing has
the count.
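The counting trick above is a parallel tree reduction: with n people it finishes in about log2(n) rounds. A sequential Python sketch of the rounds (run on one thread, so it only models the round structure, not real concurrency):

```python
def count_by_pairing(n_people):
    """Each round, standing people pair up; one of each pair sits down and
    hands their running total to the other. Returns (count, rounds)."""
    standing = [1] * n_people        # everyone starts with '1' in mind
    rounds = 0
    while len(standing) > 1:
        merged = [standing[i] + standing[i + 1]
                  for i in range(0, len(standing) - 1, 2)]
        if len(standing) % 2:        # the odd person out keeps standing
            merged.append(standing[-1])
        standing = merged
        rounds += 1
    return standing[0], rounds

print(count_by_pairing(1000))  # prints (1000, 10)
```

With 1000 people, only 10 rounds are needed; done in parallel, each round takes constant time regardless of crowd size.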

Usually, you only need to repeat this very few times to get the answer. And
this is what computer science is about. The problem is, that's not the way
most computer programs work. Because for most programs, even if you modeled
1000 people in Java, there's still only one person doing one thing at a time.
Everyone runs on a monolithic thread. If you call libraries, you are giving
them the most important thing you have -- the thread. And they don't promise
they will return it to you.

By contrast, in Erlang every process (object) has its own resources; no one
can stop other people from doing things, and no one can use up all the
resources. The real world is a concurrent world where everyone is an isolated
individual who can do things at the same time.

------
nudq
[https://lobste.rs/s/8yohqt/alan_kay_oo_programming#c_5xo7on](https://lobste.rs/s/8yohqt/alan_kay_oo_programming#c_5xo7on)

>> He doesn’t have random opinions about “objects”, he invented the word

> Kay did not invent the term “object”.

~~~
Ovid
Alan Kay says he did.

"I'm sorry that I long ago coined the term "objects" for this topic because it
gets many people to focus on the lesser idea."

Source: [http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html](http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/017019.html)

~~~
nudq
> Alan Kay says he did

How sad if true! I'm reserving the right to think you misunderstood what he
was trying to say there.

