
Brad Cox: when OOP was about “Software-ICs” and micro-transactions - kemenaran
https://deprogrammaticaipsum.com/2019/12/02/brad-cox/
======
discreteevent
He references the Byte magazine Smalltalk issue. There was another issue much
later that looked at why objects failed but actually they didn't. They
succeeded but they weren't called objects. They were called Visual Basic
controls/components. These were a runaway success. VB, despite its
limitations, enabled a massive amount of software to be written quickly and
easily, mainly through re-using commercial components.

But the components were completely encapsulated. Black box, no source code, no
inheritance. You could argue the same for services today. Services are
software ICs; you can call them objects or whatever you like, but they achieve
Brad Cox's objective, which is all that matters. (It's just a pity that people
think that a service/component always has to run in a separate process, unlike
VB.)

~~~
r00fus
I think the concept of software "parts" (similar to the auto industry - many
of the components are made by 3rd parties) is only tangentially related to
object oriented programming.

There were always APIs but they're still parts with an exposed interface - now
just distributed over the Internet "as a service".

~~~
imglorp
I think we have succeeded at software ICs beyond our wildest dreams at the
time: standard, third-party components with mostly understood behaviors,
documentation, and catalogs to obtain them.

Almost everything except C has web-loadable modules and online docs (analogous
to IC data sheets, in retrospect!) at this point, whether it's Perl's CPAN
(going back to 1995) or Gems or MVN or whatever, through the obvious left-pad
disasters, and now containers and various Hubs.

edit. What we didn't dream of when the Byte Smalltalk issue came out (should
NOT have tossed mine :-) was the implications of networked module repos. Add
one or two lines of code and the system goes out and gets a standard module
and wires it in for you. This was not on the menu for people shopping for 74xx
at Radio Shack and Digikey.

------
ChrisSD
It's a shame that, in the minds of many, OOP has come to mean only:

* chuck everything vaguely related in a class

* share code by using inheritance

IMHO, the way Java and C++ have been taught (in practice, if not by teachers)
has done a disservice to OOP. So much so that many programmers now don't
consider languages to be OOP unless they have the class keyword and
inheritance.

~~~
MaxBarraclough
> many programmers now don't consider languages to be OOP unless they have the
> class keyword and inheritance.

If a language doesn't support inheritance, in what sense is it an object-
oriented programming language?

It's true that you can use OOP principles in languages that don't have
language-level support for OOP. The canonical example of this is the GObject
library for C. That doesn't tell us how we should use the term _object-
oriented programming language_ though. I don't see why we'd use a more
expansive definition.

~~~
int_19h
In every sense. Prototype-based OO has been a thing for a long time now.

If you look at the entire breadth of OO languages and see what they have in
common, there's really only two things that are truly common: object identity,
and some form of dynamic dispatch based on that. Everything else -
inheritance, encapsulation etc - is optional.
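
That minimal definition - object identity plus dynamic dispatch, with
inheritance strictly optional - can be sketched in a few lines of Python (the
classes here are made up for illustration): two unrelated types, no shared
base class, and the dispatch happens on the receiver at call time.

```python
class Dog:
    def speak(self):
        return "woof"

class Robot:
    def speak(self):
        return "beep"

# Dynamic dispatch on the receiver: each object answers the same
# message its own way, with no inheritance relationship at all.
for thing in (Dog(), Robot()):
    print(thing.speak())

# Object identity is separate from the contents of the object:
a, b = Dog(), Dog()
print(a is b)  # False: two distinct identities
```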

~~~
MaxBarraclough
Prototype-based OO is a form of inheritance.

> there's really only two things that are truly common: object identity, and
> some form of dynamic dispatch based on that. Everything else - inheritance,
> encapsulation etc - is optional.

That's a broader definition than I had in mind, but I can see where you're
coming from.

------
ilovecaching
If you want to really see OOP in action, study Erlang. As the late, great Joe
Armstrong says in his introductory book, Erlang is a real OOP system - objects
can only communicate through message passing. There is no notion of
visibility, friendship, inheritance, etc. Objects can hold state and pass
messages, and that's it - which coincidentally makes concurrency dead simple.
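
As a rough sketch of that idea (not Erlang, and far less capable than Erlang's
lightweight processes and mailboxes), an "object" in this sense can be modeled
in Python as a thread holding private state plus a queue for its mailbox; the
only way to reach the state is by sending a message:

```python
import threading
import queue

def counter_process():
    """A tiny Erlang-style "object": private state, reachable only via messages."""
    mailbox = queue.Queue()

    def loop():
        count = 0  # state lives only inside this closure
        while True:
            msg, reply_to = mailbox.get()
            if msg == "incr":
                count += 1
            elif msg == "get":
                reply_to.put(count)
            elif msg == "stop":
                break

    threading.Thread(target=loop, daemon=True).start()
    return mailbox

mailbox = counter_process()
mailbox.put(("incr", None))
mailbox.put(("incr", None))
reply = queue.Queue()
mailbox.put(("get", reply))
print(reply.get())  # 2
```

Nothing outside the loop can see or mutate `count`; concurrency falls out for
free because all interaction is serialized through the mailbox.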

What most people think of when they think OOP is C++, Java, and Python, which
got OOP completely wrong and ruined our perception of it. They also tried to
use their misshapen OOP hammer on every problem they could find, to the point
that it’s an overused meme. These days you see languages actively trying to
distance themselves from OOP as a form of enticement (Rust and Go being the
two prime examples).

~~~
xscott
> These days you see languages actively trying to distance themselves from OOP
> as a form of enticement (Rust and Go being the two prime examples).

They definitely distance themselves from inheritance. However both Rust traits
and Go interfaces look to me like they're there to appease people who like the
.method() calling syntax. (I know there's more to the story than that,
particularly for Rust and its type system...) I think either could've gone
with multimethods or overloaded functions and been better for it, but a lot of
people seem to really like the object.method() look.

~~~
cgrealy
> a lot of people seem to really like the object.method() look.

There’s a pretty simple reason for that: autocomplete.

In most halfway decent dev environments, typing “object.” will present you
with a list of possible operations on that object. OTOH, if you want to do
“method(object)” you need to know the method name (in all current scopes,
including globals).

I’m not saying this is the only or even best way to write code, but it’s
definitely a factor IMHO.

~~~
xscott
You're almost certainly right, this is pretty compelling. It's not the way I
work, but I'm sure lots of people really like their IDEs. Despite that, I
recently changed one of my APIs from being:

    
    
        result foo(bar b, other stuff)
        result foo(baz b, other stuff)
        result foo(bum b, other stuff)
    

To:

    
    
        result r = bar.foo(other stuff)
        result r = baz.foo(other stuff)
        result r = bum.foo(other stuff)
    

Not because of the IDE, but because the compiler error messages are horrible
for the ones above when you make a mistake. If you pass the type as the first
argument, the compiler "helpfully" tells you 3 pages of information about all
of the overloads. However, if you use method syntax, it only tells you
overloads for the one object.

It's a little frustrating that the hammer is changing the shape of our hands,
and not vice versa.

~~~
DonHopkins
The world has been changing the shapes of our hands since they were fins.

The problem I have with most language's function call syntax (except for
point-free stack based languages like FORTH and PostScript) is that you can
have multiple fingers on your "in" hand, but only one finger on your "out"
hand. C#'s in/out/ref modifiers and Lisp's multiple-value-bind are hacks.

[https://en.wikipedia.org/wiki/Tacit_programming](https://en.wikipedia.org/wiki/Tacit_programming)

FORTH's /MOD ( numerator denominator -- remainder quotient ) naturally takes
two integer inputs and returns two integer outputs, and it doesn't need any
special clumsy syntax to express that.
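
For comparison, Python's built-in divmod hands back both results at once as a
tuple (note the order differs from /MOD, which leaves remainder then
quotient):

```python
# 7 3 /MOD in FORTH leaves 1 (remainder) then 2 (quotient) on the stack.
# Python's divmod returns (quotient, remainder) as one tuple:
q, r = divmod(7, 3)
print(q, r)  # 2 1

# Any function can return several values the same way, with no
# out-parameters or special syntax at the call site:
def min_max(xs):
    return min(xs), max(xs)

lo, hi = min_max([5, 2, 9, 3])
print(lo, hi)  # 2 9
```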

~~~
dragonwriter
> The problem I have with most language's function call syntax (except for
> point-free stack based languages like FORTH and PostScript) is that you can
> have multiple fingers on your "in" hand, but only one finger on your "out"
> hand.

I dunno. In many modern languages that aren't point-free stack-based languages
(and the difference between these is arguably one of perspective more than
concrete substance), you can either have multiple “fingers” on either hand or
only one on each, but the one value each hand touches can be arbitrarily
structured and destructured.

~~~
DonHopkins
That same logic can be used to argue that functions should only support one
input argument, too. If you can have multiple inputs, then what's wrong with
multiple outputs? And if multiple outputs are so bad, then why have multiple
inputs?

There's a big difference between simply and efficiently returning multiple
values on the stack without generating intermediate garbage and memory
references, and packing multiple values up into a single tuple, polymorphic
array, structure, or class, and then destructuring it later, or passing input
parameters that are indirect pointers to temporary output locations in linear
memory.

Using indirect pointers for output parameters can also cause bugs and
performance optimization problems with aliasing.

[https://en.wikipedia.org/wiki/Pointer_aliasing](https://en.wikipedia.org/wiki/Pointer_aliasing)

To elaborate on what I said: C#'s in/out/ref and Lisp's multiple-value-bind
syntax are clumsy, inelegant, and inefficient hacks.

And languages like Java that don't have pointers can't even do that, and just
have to proliferate pointless container classes and generate garbage
intermediate objects. Quick: How do you implement swap(a, b) in Java? (Without
using an IntermediatingObjectSwapperDependencyInjectorProxyThunkingShadowEnumeratorComponentBeanAdaptorDecoratorReferenceRepositoryServiceProviderFactoryFactory!)

[http://www.javapractices.com/topic/TopicAction.do?Id=37](http://www.javapractices.com/topic/TopicAction.do?Id=37)

[https://stackoverflow.com/questions/1403921/output-parameters-in-java](https://stackoverflow.com/questions/1403921/output-parameters-in-java)

[https://stackoverflow.com/questions/3624525/how-to-write-a-basic-swap-function-in-java](https://stackoverflow.com/questions/3624525/how-to-write-a-basic-swap-function-in-java)

With WebAssembly, for example, returning multiple values has a significant
effect on performance and code size, because it's much more costly to return
multiple values indirectly through linear memory than on the stack. (The stack
is in a separate address space that you can't point to like linear memory.)

That's why there's an active multiple value return proposal for WebAssembly,
which is implemented in Chromium release 80:

[https://www.chromestatus.com/feature/5192420329259008](https://www.chromestatus.com/feature/5192420329259008)

[https://hacks.mozilla.org/2019/11/multi-value-all-the-wasm/](https://hacks.mozilla.org/2019/11/multi-value-all-the-wasm/)

>But Why Should I Care?

>Code Size

>There are a few scenarios where compilers are forced to jump through hoops
when producing multiple stack values for core Wasm. Workarounds include
introducing temporary local variables, and using local.get and local.set
instructions, because the arity restrictions on blocks mean that the values
cannot be left on the stack.

>Consider a scenario where we are computing two stack values: the pointer to a
string in linear memory, and its length. Furthermore, imagine we are choosing
between two different strings (which therefore have different pointer-and-
length pairs) based on some condition. But whichever string we choose, we’re
going to process the string in the same fashion, so we just want to push the
pointer-and-length pair for our chosen string onto the stack, and control flow
can join afterwards. [...]

>This encoding is also compact: only sixteen bytes!

>When we’re targeting core Wasm, and multi-value isn’t available, we’re forced
to pursue alternative, more convoluted forms. We can smuggle the stack values
out of each if and else arm via temporary local values: [...]

>This encoding requires 30 bytes, an overhead of fourteen bytes more than the
ideal multi-value version. And if we were computing three values instead of
two, there would be even more overhead, and the same is true for four values,
etc… The additional overhead is proportional to how many values we’re
producing in the if and else arms. [...]

>Returning Small Structs More Efficiently

>Returning multiple values from functions will allow us to more efficiently
return small structures like Rust’s Results. Without multi-value returns,
these relatively small structs that still don’t fit in a single Wasm value
type get placed in linear memory temporarily. With multi-value returns, the
values don’t escape to linear memory, and instead stay on the stack. This can
be more efficient, since Wasm stack values are generally more amenable to
optimization than loads and stores from linear memory.

~~~
dragonwriter
> That same logic can be used to argue that functions should only support one
> input argument, too.

Some languages do (more, if you count languages where what looks like multiple
arguments is really one value: the complete set of arguments that can be
passed to a function corresponds directly to a single data structure in the
language).

> There's a big difference between simply and efficiently returning multiple
> values on the stack without generating intermediate garbage and memory
> references, and packing multiple values up into a single tuple

Fundamentally, there's not, since you have to represent both the number of
items and each item either way. It's true that there are more and less
efficient ways of performing the task, but the information required is
identical, so any mechanism capable of implementing what looks like one to a
user of the language can also implement what looks like the other, from that
same perspective.

The obvious implementation when conceptualized each way may differ, given
other elements of a language's design or its implementation, but that's not an
inherent difference. Efficiencies in the implementation need not be reflected
in language-level features (and supporting any particular language-level
feature is no guarantee of an efficient implementation.)

------
travisgriggs
Veteran Smalltalker-turned-modern-IoT-embedded-polyglot here.

OOP is difficult to judge because it became such a big envelope. Alan Kay
famously said in 1997, "I invented the term Object Oriented Programming, and
this [C++ and Java] is not what I had in mind."

He would later tell the Smalltalk community that even that wasn't spot on -
only an early step toward what he was really after in his visionary's quest to
create a DynaBook that would increase world peace (visionaries always reach
big).

In my own journeys, my own personal conviction came to be that the "paradigm
shift" with the OO movement was to learn to bind behavior to data. This (IMO)
fits Kay's ideas about cellular biology and the inspiration he embraced with
his ideas. It's all about binding behavior to data.

Everyone should read that byte magazine (because apparently 500+ pages was a
magazine), especially Peter Deutsch's section on block closures. Smalltalk did
more with closures than any other language I've used since. Because closures
are objects too.

------
protomyth
Superdistribution was a pretty good book. I guess we went a really different
direction after the Visual Basic component years. I keep coming back to
Superdistribution, Visual Basic components (VBX/OCX), and the concepts from
Mirror Worlds[1] (tuple spaces), and wonder if there are still some paths to
look at down those roads.

1) Mirror Worlds: or the Day Software Puts the Universe in a Shoebox...How It
Will Happen and What It Will Mean

~~~
DonHopkins
Nice book, but it's too bad David Gelernter's brain went bad.

[https://www.washingtonpost.com/news/speaking-of-science/wp/2017/01/18/david-gelernter-fiercely-anti-intellectual-computer-scientist-is-being-eyed-for-trumps-science-adviser/](https://www.washingtonpost.com/news/speaking-of-science/wp/2017/01/18/david-gelernter-fiercely-anti-intellectual-computer-scientist-is-being-eyed-for-trumps-science-adviser/)

~~~
protomyth
Given the false reporting done by the Washington Post on the Covington kids
when they had video of the incident, I place no value in their reporting.
Thanks for dragging politics into this discussion.

~~~
DonHopkins
Except for the fact that the Washington Post's reporting in the vast majority
of cases, including Watergate and the Iraq War and Ukrainegate, is
historically dead-on accurate, excellent, and award-winning, and when they DO
make mistakes, they actually admit and correct them, as they already did in
the case you're complaining about:

[https://www.washingtonpost.com/nation/2019/03/01/editors-note-related-lincoln-memorial-incident/](https://www.washingtonpost.com/nation/2019/03/01/editors-note-related-lincoln-memorial-incident/)

Your attempt to dismiss everything they write out of hand, because of one
mistake they already admitted and corrected, while mindlessly parroting the
propaganda of a pathological liar who never admits any of his many mistakes
and has peddled at least 13,435 documented lies during his term, is the very
definition of dragging politics into this discussion, and it's intellectually
dishonest of you to project like that. You're the one who first dragged a
demented political hack-job into this discussion - someone who was angling for
a job in the Trump administration sabotaging science, denying climate change,
and promoting Intelligent Design.

[https://whyevolutionistrue.wordpress.com/2019/05/17/computer-scientist-david-gelertner-drinks-the-academic-kool-aid-buys-into-intelligent-design/](https://whyevolutionistrue.wordpress.com/2019/05/17/computer-scientist-david-gelertner-drinks-the-academic-kool-aid-buys-into-intelligent-design/)

>Computer scientist David Gelernter drinks the academic Kool-Aid, buys into
intelligent design: I’ve pondered at great length how a man can be apparently
as intelligent as Gelernter, yet so susceptible to the blandishments of
Intelligent Design—and so ignorant of the evidence that refutes it.

Honestly: Do you also choose to deny anthropogenic climate change (and
Darwinian evolution for that matter) and push Intelligent Design, just like
Gelernter does, and to gullibly believe Putin's propaganda about Ukraine
interfering in the elections instead of Russia, just like Trump does?

[https://yaledailynews.com/blog/2017/01/25/gelernter-denies-man-made-climate-change/](https://yaledailynews.com/blog/2017/01/25/gelernter-denies-man-made-climate-change/)

>Gelernter, potential science advisor to Trump, denies man-made climate
change: “For human beings to change the climate of the planet is a monstrously
enormous undertaking,” Gelernter said. “I haven’t seen convincing evidence of
it.”

[https://en.wikipedia.org/wiki/David_Gelernter#Controversial_...](https://en.wikipedia.org/wiki/David_Gelernter#Controversial_positions_on_science)

>David Gelernter does not believe in anthropogenic climate change. In July
2019, Gelernter challenged Darwin's theories.

By the way, Gelernter's also a patent troll sell-out, whose ideas are
unoriginal:

[https://arstechnica.com/tech-policy/2016/07/apple-will-pay-25m-to-patent-troll-to-avoid-east-texas-trial/](https://arstechnica.com/tech-policy/2016/07/apple-will-pay-25m-to-patent-troll-to-avoid-east-texas-trial/)

>Apple will pay $25M to patent troll to avoid East Texas trial

>When Mirror Worlds ran out of appeals, it gave up and sold its patent—to
another patent troll called Network-1 Security. In 2013, Network-1 created a
similarly named LLC, this time called Mirror Worlds Technologies, and filed
another lawsuit (PDF) in the Eastern District of Texas. The same patent, No.
6,006,227, was used to sue the same target, Apple.

>When Apple started to come out with features like Cover Flow and Time
Machine, Gelernter believed his own ideas being used. "I know my ideas—our
ideas—when I see them on a screen,” he told the New York Times in 2011, while
his case was on appeal.

To address your question: You may "wonder if there are still some paths to
look at down those roads", but if you follow the path that leads to Mirror
Worlds, you'll get sued for patent infringement by a troll.

So exactly which facts of that article do you disagree with? And where is your
proof that what the Washington Post and Yale Daily News and Wikipedia and many
other sources (including himself) say about Gelernter is false, and that he
does actually believe in anthropogenic climate change and Darwinian evolution,
in spite of his own quoted words? Or that he was the first person to invent
the idea of presenting documents in chronological order? Or is it all based on
your unsupported uninformed false opinion that everything the Washington Post
says is "fake news"?

Don't even bother answering if you don't have any proof.

------
RossBencina
This article was an excellent read. Like other commenters, it reminded me of
Clemens Szyperski's "Component Software" book. While a component model of
software production has succeeded in some niches, it has (so far) failed to
become a central organising principle for large scale software production. The
article suggests that new micropayment mechanisms such as blockchain might
enable Brad Cox's vision of pay-as-you-use software components. But I think
this misses the point. I would argue that software differs from physical goods
in important ways that make it a poor fit for this model. In particular,
software is cheap to modify, evolve and customise, but expensive to specify
independent of implementation (cf. Refactoring, Agile processes versus ISO
Standards Development.)

One issue that Szyperski's book examined was the composition model (i.e.
object model: COM, CORBA, JavaBeans, etc) used for defining interfaces between
software components and how those interfaces can be composed. The idea of
standardizing interfaces and composition mechanisms does not get so much
attention today. It seems that things are currently balkanized into language
communities, each doing their own thing.

The industrial manufacturing concept of "interchangeable components" comprises
two things: (1) standardized specifications, and (2) multiple independent
manufacturers who are able to independently implement those specifications. We
do have this kind of practice in, for example, the specification and
implementation of the C++ standard library. But that's not how most software
is developed, and it is commonly held that it is not how most software should
be developed. On the other hand, we do have "component reuse" in the form of
software libraries -- but the interfaces are unique and idiosyncratic to each
library, not standardized, and there is usually a single "manufacturer".

------
mhd
I don't think this is a failure of OOP itself, nor does it come down to any
language's shortcomings.

I think it's mostly a market failure. The units of trade were complete
software products - and, these days, services/apps. Mostly because that's what
the end users actually _see_ on the desktop, the atomic unit of software is
huge and perceptible.

Once upon a time, it looked like it could be different. Component-oriented
systems arose. COM, CORBA, CommonPoint, Taligent, D'OLE etc. (Heck, one might
even argue for Amiga filetypes and embedded X windows to belong in the same
category)

But apart from UI widgets (i.e. Windows controls for VS or Delphi), it never
seemed like there was a marketplace for it. It was also quite hard to sell to
managers, as it's something you can't put in a box or slap a big trademark on.

So we ended up with fast food joints and black-box-systems, no ICs,
condiments, recipes etc.

Open Source could've been an answer/alternative, but the OSS world mostly
copies stuff from programmers' day jobs.

(Gnome actually started out with CORBA and KDE had some component structure,
too, but that doesn't appear to be anyone's focus, compared to copying
notifications and doing flat redesigns)

------
jasim
Two interesting ideas that helped me get insight into OO:

1) An object is a poor man's closure; and a closure is a poor man's object.

2) Most object-oriented programming is done with mutable state, which muddles
what OO is. You don't really need state to have objects.

In a closure, the lexical scope is preserved, and the functions defined in it
can access it any time. This is very similar to what an object does:
[https://stackoverflow.com/a/2498010](https://stackoverflow.com/a/2498010)
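
The koan can be made concrete in Python (names here are illustrative): a
closure over a lexical variable and a one-field class are indistinguishable
from the outside.

```python
# Closure-based "object": the lexical scope holds the state.
def make_counter():
    count = 0
    def incr():
        nonlocal count
        count += 1
        return count
    return incr

# Class-based equivalent: the instance holds the state.
class Counter:
    def __init__(self):
        self.count = 0
    def incr(self):
        self.count += 1
        return self.count

c1, c2 = make_counter(), Counter()
print(c1(), c1())            # 1 2
print(c2.incr(), c2.incr())  # 1 2
```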

When objects are immutable, one gets to truly appreciate the elegance of
grouping both data and their functions together:
[https://dev.realworldocaml.org/objects.html#scrollNav-3](https://dev.realworldocaml.org/objects.html#scrollNav-3)
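
A rough Python analogue of the immutable-object style in the linked OCaml
chapter uses a frozen dataclass whose "mutators" return fresh instances
instead of changing state (`Point` and `moved` are made-up names):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: float
    y: float

    def moved(self, dx, dy):
        # No mutation: build and return a new Point instead.
        return replace(self, x=self.x + dx, y=self.y + dy)

p = Point(1.0, 2.0)
q = p.moved(3.0, 0.0)
print(p, q)  # Point(x=1.0, y=2.0) Point(x=4.0, y=2.0)
```

Data and behavior stay grouped together, but sharing a `Point` is always safe
because no caller can change it out from under you.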

~~~
tudelo
RE 2: do you mean you don't need to have _mutable_ state to have objects?
Unless I have a huge misunderstanding of what you mean by 'state', it seems
like state is necessary for objects to exist...

~~~
lelandbatey
I think the grandparent comment is saying that you do not need your state to
be mutable; that writing software where your state is immutable is useful and
elegant.

~~~
tudelo
Yeah, that is what I was trying to clarify. I guess my comment was equally
confusing, because I am getting responses trying to fill in words that the OP
may or may not have left out accidentally.

