Hacker News
Alan Kay on the Meaning of “Object-Oriented Programming” (2003) (purl.org)
395 points by tosh on Mar 17, 2019 | 292 comments

I still don't really understand what the big idea of message passing compared to a method call is.

Is it supposed to be asynchronous? It never seems to be so in practice. Is it supposed to be remotable? So are method calls. Are languages like Objective-C and Ruby message passing to some extent? Why is that message passing rather than a method call? What's the essential difference? Is Java message passing?

The big idea of message passing is that it's not about message passing. I think what Alan Kay has been trying to say all these years is that we should look at how bacteria communicate, and try to mimic that.

They communicate by sending protein "messages" into their vicinity; if a nearby bacterium knows what to do with one, it processes it and responds by either sending out proteins of its own (message passing) or changing its internals to become receptive to different kinds of messages (changing internal state).

From this viewpoint, coding starts being about cultivating a literal society of objects (cells). The obvious advantage being that biology scales immeasurably better than any architecture we have ever come up with. He often gives the example of the internet as a well-designed system: it operates in a similar manner; objects (anything that can identify itself with an IP address) can join or leave at any point, and the system itself stays alive.

I'm not 100% sure if this is what Alan means, but after following his talks and writings for a couple of years, I think that's the gist of it.

> The big idea of message passing is that it's not about message passing. I think what Alan Kay has been trying to say all these years is that we should look at how bacteria communicate, and try to mimic that.

It is such a horrible programming model, though.

Making calls and sometimes they get picked up and sometimes not?!?

If we're talking remote processes, sure, that's understandable, remote resources are not always available.

But in-process? If I call a function on an object, I want the compiler to tell me right now if this code makes sense, and the compiler should stop me right away if that code cannot possibly work once deployed.

I don't think it's about sending a message into the void and "hoping" that a compatible agent receives the message.

Rather, it's about your object not having to know who is handling the message, only what the message is ("what" in both content and type).

It's still up to you as the programmer to make sure that you have implemented the other objects to act on whatever message types your code will pass. Just like it's up to you to make sure the method exists in the target class when the caller tries to invoke it.

Later on, if you decide to swap out your logging infrastructure for a different one, you don't need to update all the places that called the old one. You just tell your new logger to listen for messages directed at Logger.
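The logger swap described above can be sketched as a tiny message bus in Ruby (all names here, such as MessageBus and the :logger topic, are invented for illustration): senders address a topic, not a concrete object, so replacing the listener never touches the sending code.

```ruby
# A minimal publish/subscribe bus: objects send to a topic, and whichever
# listener registered for that topic handles the message.
class MessageBus
  def initialize
    @listeners = Hash.new { |h, k| h[k] = [] }
  end

  def listen(topic, &handler)
    @listeners[topic] << handler
  end

  def send_message(topic, payload)
    @listeners[topic].each { |h| h.call(payload) }
  end
end

bus = MessageBus.new
log_lines = []
# Swapping the logging infrastructure means registering a new listener
# for :logger; none of the sending sites change.
bus.listen(:logger) { |msg| log_lines << "new-logger: #{msg}" }
bus.send_message(:logger, "hello")
log_lines  # => ["new-logger: hello"]
```

The sender only knows the message's topic and content; it never holds a reference to the logger object itself.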

> I don't think it's about sending a message into the void and "hoping" that a compatible agent receives the message.

Except that is the bacterial model. When you want something done, you signal for it by releasing proteins, and keep releasing proteins until you're satisfied even if the work ended up being done 50 times more than you needed. That's why biological systems so commonly overreact to stimulus. The biological method is to flood communication until whatever demanded response is met. It's the equivalent of a user mashing a button until a program responds, like the elevator button problem.

Actually, elevators seem like an extremely good analogy for this kind of asynchronous service system.

I understand we're not trying to perfectly mimic the analogy, but I think it's important to see that nature's model, while robust and asynchronous, carries significant problems due to how it communicates. It's that very robustness that we're trying to mimic, so we should expect to inherit some of the problems that go along with it.

Yeah, the biological model is much more messy. I think we can take some lessons from it while still enforcing a "God Mode" on our local machine, ensuring that a compatible process is always running to receive the message.

Though, as GP stated earlier, this does make some more sense from a distributed networking perspective, where no machine has control over the existence of its peers. In that case, having a setup where the message is sent out on a service bus to be snatched up by a compatible listener is closer to the biological analogy.

Great insight! To add to this: to counteract it, one needs to implement some kind of synchronization device, which brings plenty of other problems of its own. It would be exciting to see whether nature would choose a more synchronized approach if it had the choice.

But is it possible for the compiler to check that some object was actually set to wait to receive a particular message that another object was instructed to send? When coding with method calls, it is possible to accidentally leave a stub instead of method code that is supposed to produce some important side effect, say, but at least you know where to look.

In his book Programming Erlang, Joe Armstrong talks about how Erlang is OOP in the original sense. You spawn a bunch of lightweight processes, and each process is like a living function with its own brain. Pony is a newer, similar language. Each process can only communicate with the outside world through message passing. You can easily define the semantics of sync vs. async, and what to do if something fails. If something does fail, you've decoupled the error handling, and you can respawn processes, even if the failure was caused by a flipped bit of RAM. If you're just doing synchronous message passing, it should be about as reliable as Java/Ruby, and not much slower.
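A rough sketch of that Erlang-style model in Ruby (the Actor class and its methods are hypothetical, built only from Thread and the thread-safe Queue in Ruby's core): each "process" owns a private mailbox, and the only way in is an asynchronous message.

```ruby
# A toy "process" in the Erlang spirit: its sole interface is a mailbox.
class Actor
  def initialize(&behavior)
    @mailbox = Queue.new           # thread-safe FIFO
    @thread = Thread.new do
      loop do
        msg = @mailbox.pop         # block until a message arrives
        break if msg == :stop
        behavior.call(msg)
      end
    end
  end

  def send_message(msg)
    @mailbox << msg                # asynchronous: returns immediately
  end

  def stop
    @mailbox << :stop
    @thread.join
  end
end

results = Queue.new
doubler = Actor.new { |n| results << n * 2 }
doubler.send_message(21)
doubler.stop
results.pop  # => 42
```

Real Erlang adds the crucial parts this sketch lacks: cheap process spawning, supervision trees, and restart-on-failure semantics.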

There are still some downsides: you can now easily get race conditions from human error, and you're dealing with distributed-systems problems. You have to worry about defining your communication interfaces and keeping them updated and resilient to change. Message passing has a lot of trade-offs.

> Making calls and sometimes they get picked up and sometimes not?!?

I like that paranoia is the default. Once you get used to it, easy stuff remains easy and hard stuff becomes less scary.

> But in-process? If I call a function on an object, I want the compiler to tell me right now if this code makes sense, and the compiler should stop me right away if that code cannot possibly work once deployed.

You are going off on a tangent. What you are referring to is a matter of typing and compiler help, and is not related to the OO paradigm as taught by Alan Kay. (If there is a relation, I would be happy to learn it.) Put more concretely: I don't see why a Java compiler would warn more than a typed Smalltalk compiler would.

>It is such a horrible programming model, though.

It's an honest programming model that can deal with real life. A lot of modern systems converged to Kay's OOP model, except the implementations are implicit and created by continuously tripping over problems that were already solved in the 80s.

JavaScript is Smalltalk minus elegance, minimalism and orthogonality.

Web pages with JavaScript are self-interpreting data in the exact same sense Kay's objects are.

Microservices are crude, bloated objects that communicate through routed messages.

Containers are a validation of Kay's idea that sharing encapsulated objects is easier than sharing data.

Most people commenting here about Smalltalk either never watched any of Kay's talks, or are simply too dumb to understand what he is talking about.

>But in-process?

Most of the real programming problems these days are not in-process.

in-process vs. remote is as relative as "right now". It depends on the scale you're looking at/waiting for.

The interesting thing with the bacteria/biology metaphor is that the time scale variation is huge, from micro to macro.

Your compiler is life itself and the validation is survival, with evolution.

Sure, it may not be enough to be practical for computer-based stuff, but for a resilient/scalable system, that's a very interesting and enlightening angle to look at.

> Making calls and sometimes they get picked up and sometimes not?!?

It's the basis of mainstream OOP as well. When you make a method call, you only know by loose protocol what effect it has, which objects it has an effect on or even whether it has an effect at all, as opposed to manipulating the data structure directly. It is the recipient of the message that decides what effect it has. This doesn't preclude having some mechanism to tell whether the message was accepted or not.

IMO the significant difference between something like Smalltalk-style message passing and Java-style method calling is that an unknown message in Smalltalk is a run-time "error"—the receiving object is sent a #doesNotUnderstand: message with the original message as an argument—while in Java it can be checked statically, because the object definition specifies which messages it handles.
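Ruby's method_missing hook gives a rough feel for the Smalltalk behavior just described (Tolerant is an invented example class): the receiver, not the caller, decides what an unrecognized message means.

```ruby
# An object that, like a Smalltalk object overriding #doesNotUnderstand:,
# decides at run time what to do with messages it has no method for.
class Tolerant
  def known
    "handled"
  end

  # Ruby's hook for unrecognized messages (the analogue of #doesNotUnderstand:).
  # By default Ruby would raise NoMethodError here, just as Smalltalk's
  # default #doesNotUnderstand: raises an error.
  def method_missing(name, *args)
    "no method for #{name}"
  end

  def respond_to_missing?(name, include_private = false)
    true
  end
end

t = Tolerant.new
t.known            # => "handled"
t.anything_at_all  # => "no method for anything_at_all"
```

With the hook overridden, no message is a static error; every send reaches the object and the object interprets it.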

A simple one-way message passing OO architecture in C, using a global set of messages:

    #define SEND(_objp, _msgp) (((struct Object *)(_objp))->dispatch((struct Object *)(_objp), (struct Message *)(_msgp)))

    enum MsgStatus list_dispatch(struct Object *self, struct Message *message)
    {
        switch (message->method) {
        case MSG_INSERT:
            insert_into_list((struct List *)self,
                             ((struct InsertMessage *)message)->index,
                             ((struct InsertMessage *)message)->value);
            return STATUS_OK;
        case MSG_DELETE:
            // ...
            return STATUS_OK;
        default:
            // Maybe the parent implements the message
            return SEND(((struct List *)self)->parent, message);

            // or we decide that this isn't a valid message for this object:
            // return STATUS_NOTUNDERSTAND;
        }
    }

    // ...

    // Construct and send an INSERT message to someList
    createInsertMessage(&insertMessage, 0, "hello");
    status = SEND(someList, &insertMessage);
Now, every object is encoded by a struct starting with an Object struct which contains a generic dispatch function pointer, so that each object can encode its own dispatch logic and handle whatever messages it receives as it sees fit. In this case, the List struct also has a parent field, to which it defers unknown method calls. If it did not, it could just return a method-not-found status code. It doesn't have to know whether the parent implements the message.

Likewise, every message starts with a Message struct which contains the message type/method name. Depending on the type it can be cast into more specific messages.

An interesting aspect of this architecture is that it doesn't explicitly implement any kind of inheritance logic. You let the objects handle that themselves. A possible benefit of having any object accept any kind of message is that you decide whether it's an error that an object could not receive a message or not. Maybe you don't care that the object couldn't receive e.g. a NOTIFY_CHANGE message.

There's just not much about this that's unique to OOP. It's just defunctionalization, combined with "add another layer of indirection" to implement dynamic dispatch and something like existential types. It can definitely be useful in some cases, but the way OOP proponents frame this does not seem very helpful. Always remember the YAGNI!

It's true that there is not much about it that's unique to OOP, but that can be said of most high level programming concepts. They're just the cobbling-together of lower level abstractions.

I just wanted to demonstrate why message passing via dynamic dispatch isn't so fundamentally different from static dispatch, conceptually, and to what extent it really is "making calls and sometimes they get picked up or not" compared to any other dispatch mechanism.

Kay might disagree with this on the basis that late binding is important and that an object should not need to know what messages it will be passed at run-time; that the message is just that, a message rather than a contract. The only contract that exists is that an object should be able to receive a message, any message. But even with checked method calls like in Java I think the fundamental idea that you shouldn't have to consider what effect a method has on an object remains. What a language like Java adds is contracts like abstract classes or interfaces, essentially a way to tell the type checker what messages may be passed to an object.

I agree that this approach can be useful in some cases, and also think that not only Ain't you Gonna Need It most of the time, but that it will cause serious headaches and existential dread when applied to the wrong problem. So will Java. IMO OOP is suffering a bit of a backlash not because it's particularly bad, but because, like a Swiss army knife, it looks deceptively applicable to a wider range of problems than it really is.

"It's just defunctionalization, combined with "add another layer of indirection" to implement dynamic dispatch and something like existential types."

But isn't this a pretty good working definition of "Object Oriented Programming"?

> It's the basis of mainstream OOP as well.

Not at all!

If I call a.foo(), I know for a fact foo() will be called. That statement is not just going to be ignored and dropped on the floor.

What it will do, though, nobody has any idea, whether the call is local or remote.

> If I call a.foo(), I know for a fact foo() will be called.

Likewise, when you send the message foo to a, you know it will be received. In both cases, the only way you'll ever have an idea of its effect, if any, is by knowing the state and implementation of a at the point of sending the message or calling the method.

What is different is how you communicate that a method might have had some effect. In Java, thanks to its type system, I'll know at build time whether a method call definitely wasn't considered: the code won't compile if the method being called is not defined. In a dynamic language like Python, calling a method that is not defined results in an exception. In Smalltalk, an object receiving a message it has no method for is itself sent the #doesNotUnderstand: message, whose default behavior is to raise an exception. Unless you explicitly override #doesNotUnderstand:, this is an implementation detail, and really isn't much different, especially compared to dynamically typed languages with OOP facilities.

So as I said before, the message passing mechanism doesn't preclude communicating whether it was handled. Messages get ignored if that's what you want. Java IMO really has the same potential for run-time uncertainty thanks to exceptions.

Sounds more like what many people would call event-driven architecture, these days.

People tell me Ruby is message passing, but it doesn't seem to match your description at all.

Just to add to the previous poster's comments, Alan Kay seems to also consider the internal state of a "cell" (or object) to be private and responsive only to external messaging. If you extrapolate this to a network I personally see a "colony" of objects that are not "aware" of each other aside from their neighbours' messages, and the whole thing starts to seem like it has a lot of the features we see in modern functional programming today. I don't think this is particularly coincidental, since functional approaches are born from lambda calculus and Kay mentions in these emails a desire to incorporate algebras into his objects (aside from the biological nature of this type of cell communication).

Alan, if you're in the comments section I'd love to hear your thoughts on what you think of the resurgence of functional programming in the modern day and whether anything, other than perhaps languages like Erlang, has approached your original ideal.

I don't think I've watched it previously, and I don't have the time to right now, but I have to imagine that Joe Armstrong interviewing Alan Kay would at least touch on that (although from one comment it sounds like they didn't do much more than that).


Watch it!

I just watched the whole thing, based on just seeing your link.

It does talk about the historical connections between Smalltalk and Erlang, but goes far beyond that to remind us to think more about the bigger problems we need to solve, and get less caught up in doing things just because we have a tool that can do them or other people are doing things that way.

I found it truly inspiring.

> Alan, if you're in the comments section I'd love to hear your thoughts on what you think of the resurgence of functional programming in the modern day and whether anything, other than perhaps languages like Erlang, has approached your original ideal

Someone asked a similar question when he did an AMA, but the answer wasn't clear to me:


Ruby syntax & behaviour is in a hazy middle ground between message passing and method calling. Writing Ruby in a message-passing style feels natural and easy, which is why many Rubyists will recommend it, and doing so enables looser coupling between objects.

To me, it's the difference between pulling the strings on a puppet (method calling) vs placing a request to a colleague (messaging).

However I don't think anything in the Ruby docs ever claims that it has a message-passing intent. I think the option to write in this style is really a byproduct of duck typing combined with a Ruby object's ability to arbitrarily and dynamically reconfigure its own dispatch table. If Matz really intended a message-passing style, then Object#send would've been split into Kernel#send and Object#receive, and there'd probably be a Message class rather than just using symbols.
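A sketch of that distinction in Ruby (Greeter, the Message struct, and deliver are invented for illustration; send, public_send, and respond_to? are Ruby's own): the built-in "message" is just a symbol plus arguments, while a receive-side API in the style the commenter imagines would reify the message as a value.

```ruby
# Ruby's normal dispatch: a symbol names the method, #send invokes it.
class Greeter
  def hello(name)
    "hi, #{name}"
  end
end

g = Greeter.new
g.send(:hello, "world")  # => "hi, world"

# A hypothetical receive-side API (not in Ruby's core) would reify
# the message as an object and let delivery decide what "not understood" means.
Message = Struct.new(:name, :args)

def deliver(obj, message)
  if obj.respond_to?(message.name)
    obj.public_send(message.name, *message.args)
  else
    :not_understood
  end
end

deliver(g, Message.new(:hello, ["world"]))  # => "hi, world"
deliver(g, Message.new(:nope, []))          # => :not_understood
```

In stock Ruby the symbol-based send is sugar over the method table; there is no first-class Message value anywhere in the pipeline.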

Given the new shorthand of object.:name in 2.7 for object.method(:name) one could even argue that Matz is drifting Ruby back towards an early binding style. However I don't see myself using that syntax much outside of block parameters to Enumerable methods and the like.

Overall I think the stylistic options are a strength and we don't particularly need to resolve the dichotomy to write good Ruby.

> However I don't think anything in the Ruby docs ever claims that it has a message-passing intent.

There is of course the `Kernel#send` method, which implies the model is messages being sent. It's not called `call`.

I think you mean Object#send? I actually think that's misnamed; it should be Object#receive, because that's what happens: you're forcing an object to receive a symbol for (by default) dispatch through its method table.

The fact it isn't invoked for regular method invocation is kinda indicative of Ruby _not_ being a pure message passing language, even if we like treating it that way.

> I think you mean Object#send?

It's definitely Kernel#send. Object defines no instance methods at all.

    > Kernel.instance_methods(false).include?(:send)
    => true
    > Object.instance_methods(false).include?(:send)
    => false
    > Object.instance_methods(false)
    => []

Fascinating: the docs are misleading on this point (http://ruby-doc.org/core-2.6.2/Object.html#method-i-send), presumably because Object includes Kernel and it's just easier to find there, I guess.

That's a somewhat arcane implementation detail though. What about the meat of the discussion?

I don't think Ruby is message passing either because the caller determines which method to call, based on the object's method table. #method_missing is just another method to be called by the caller - it isn't an example of the object being able to determine what happens. I think #send is really #call, but I think the name implies someone somewhere thought it was message passing.

If I'm being uncharitable, I think applying the term 'message passing' to Ruby is a bit of an attempt to make it sound more sophisticated and elaborate than it is. They're method calls; that's pretty simple already and doesn't need a complex concept on top.

It merely remains to fork and/or invent a language that otherwise looks almost exactly like Ruby but with message passing only.


Elixir has conventional method calls by default, and only optional message passing with a separate syntax. It's definitely not message passing everywhere.

It’s message passing everywhere outside the current object (process).

It’s a little tough to compare to an OO language because there are no classes, but all object to object communications are asynchronous messages.

> To me, it's the difference between pulling the strings on a puppet (method calling) vs placing a request to a colleague (messaging).

This is the way I think about it as well; the more concise notion is "ask, don't tell". By "don't tell" I'm thinking of telling an object to change its state directly. This distinction can be very, very subtle.

Ruby, to me, seems to follow this description rather well. The "cells" in close vicinity are the objects in the ancestor chain. If the "closest cell" (the object's class) can't respond to a message, the next one further out, its parent, is given a chance, and so on. You can also ask objects whether they know how to respond with `respond_to?`.

It's a bit more involved than that: modules can be included and prepended, in a sense re-arranging which cells are close and which are further away. `method_missing` can also be implemented. Beautiful, in my opinion.
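The ancestor-chain lookup just described can be demonstrated directly (Closest, Further, and Cell are invented names): prepended modules sit closest to the message, then the class itself, then included modules.

```ruby
# Method lookup walks the ancestor chain: prepended modules first,
# then the class, then included modules, then superclasses.
module Closest
  def who_am_i
    "prepended module"
  end
end

module Further
  def who_am_i
    "included module"
  end
end

class Cell
  prepend Closest
  include Further

  def who_am_i
    "the class itself"
  end
end

Cell.ancestors.first(3)          # => [Closest, Cell, Further]
Cell.new.who_am_i                # => "prepended module"
Cell.new.respond_to?(:who_am_i)  # => true
```

Remove the `prepend` and the class's own definition wins; remove that too and the included module answers, just like the "next cell out" in the analogy.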

Here's one talk about 'message' in Ruby:

RailsConf 2015 - Nothing is Something


But to me, this simply looks like Functional Programming done in an OOP mess.

Smalltalk was also influenced by Lisp work being done at Xerox PARC.

Map, filter, flatMap, lambdas, symbols: they are all there.

You can easily do LINQ in Smalltalk-80 without any additional library.

FWIW, I had Alan Kay's biologically-inspired "message passing" approach in mind when I wrote a routing library: https://github.com/jdonaldson/golgi

The premise was to start with a standard class-based instance, and then use metaprogramming to autodefine a corresponding ADT for all possible typed responses from the API.

This gets around the main problem of message routing (IMHO): namely, if you're relying on a host system to interpret your message how it sees fit, you don't really know how it will respond in a given case. Will it error? Will it call something different from what you thought? The solution is to enumerate every possible response type and return a corresponding composite type as an ADT. I was doing that manually in a few prior cases, then realized I could use the macro programming features in Haxe to do it automatically, and put together a library as a proof of concept.

I'm certain that FP purists would argue that this is equivalent to some other pre-existing type transformation... but I haven't found it yet. I'm interested if anyone else has been experimenting in this area.

> The obvious advantage being that biology scales an immeasurable amount better than whatever architectures we ever come up with.

Not sure what you mean by this.

In the general case, the hard part of "scaling" is building systems that are easy to reason about / maintain as the complexity / throughput increases.

Meanwhile, biology is notoriously difficult to reason about and maintain (think medicine).

Conventional software engineering is extraordinarily brittle, in the sense that any random mutation to a program is likely to cause catastrophic failure: it is very likely that it will not even be valid anymore, or behave in a completely different way. In this scenario, complexity is the enemy, and we take all the well known-measures that are taken in human engineering to manage complexity.

When one dreams of a biological approach, one is interested in replicating the "graceful degradation" aspect of biological structures, which is the opposite of brittleness. If you remove one random cell from your body, it will make virtually no difference. If you remove 1K random cells, you will still not notice anything. If you remove 1M cells, maybe it's not so nice, etc. Biological systems have fewer central points of failure and are much better at adapting to unexpected situations. In this case, complexity can be our friend. I believe it is possible to learn how to design systems more like nature does, and I think this is Kay's dream.

Are you sure about this? (I haven't read the article yet) - I am asking because what you are describing looks to me closer to https://en.wikipedia.org/wiki/Tuple_space than "traditional" OOP.

I said I wasn't sure, but it's what Alan keeps coming back to, his roots in biology and mathematics, scaling, and scaling through communication.

It's obviously a bit wooshy, but I'm pretty sure that's the gist of what he intends to get across.

It is more than a bit wooshy. The 'solid' parts are all "biology", "mathematics", etc.

Alan Kay is, of course, correct that nature provides excellent guidance on how to build resilient systems. I would think that thought is fairly self evident to the subset that is preoccupied with such thoughts.

But as the endless exegesis, here and elsewhere, of the still-with-us Dr. Kay's historic utterances demonstrates, there is a very obviously missing link (or, unkindly, 'hand waving') between 'his' big idea and how we could possibly build bio-morphic computational systems with the current state of the art and craft of building software. At which point one could ask: why are we discussing these wooshy notions?

His point, as far as I understand it, is that we should keep studying biological systems as inspiration for our computational systems, not for the short term (5-10 years) but for the long term (100-500 years). Sussman likewise has famous lectures on the same topic: "we don't know how to compute".

The why is not to be practical right now, but to research these ideas to the point where they do become practical, no?

But then again, I agree with you, he does keep going on about the same ideas again and again and again, without pointing to the slightly more practical details... Basically ever. It's always "no what we do is bad, all of it, haha don't we suck?" "Look at these cathedrals and ant hills! Why can't we do that???"

> The why is not to be practical right now, but to research these ideas to the point where they do become practical, no?

I do agree. Vision can furnish the impetus. But a good "research" effort must bear fruit and result in "findings".

For example, in my mind, what is not addressed by Alan Kay is whether there will be any meaningful distinction between software and biology in an age where his vision is realized. Clarification (or development) of this key point would inform research efforts.

A second problematic aspect has to do with developmental methodology. Should we be working at the genotype or phenotype level? If the former, see above question. If latter, is not raising cattle somewhat analogous to tending to your objects? (Meaning: cattle herders are not biologists ..)

Related to this is the issue of scale. Biological systems are indeed majestic, and no wonder: look at the range of scales of biological mechanisms! Even our current wonder crust of layer upon layer of the net's stack amounts to nothing more than an active biological membrane buried somewhere deep in some micro corner of a 'biological organism'. The biological organism is stupendously complex and its component elements' sizes range from micrometers to meters. We have 10^13 cells in our bodies. And a single cell is by itself a marvel of complexity.

Are those scale ranges and complexity orders necessary for biological magic? Are we to attempt this with code on digital processors?

And let's not even discuss the various interactions that a single biological organism has with its environment and fellow creatures (at multiple layers of organizational structure, concurrently!).

And here, finally, we arrive at the actual wonder factory, an ecology where selection comes into play.

It would seem, at this point, that we should concede that perhaps the 'cattle ranchers' are not so wrong-headed after all.

Yes, I thought that the cell analogy resembled tuple spaces as well

Sussman of SICP fame gave a similar analogy. Not really about messaging but about biology in general - https://www.youtube.com/watch?v=O3tVctB_VSU

Method calls are messages. Same thing.

To better understand what Alan Kay is talking about, it helps to know a specific fact about Smalltalk, the language he invented. Unlike in C++ or Java or JavaScript, Smalltalk objects cannot have public data members. They can only have methods. Methods can read and write the "instance variables" inside the object; no one except the object itself can.

This means that objects can only communicate by calling methods. They cannot read or write data stored in another object, and are thus totally ignorant of the "implementation" stored inside other objects. They can "send it messages", meaning call its methods. But because all communication with an object is processed by the methods of that recipient object, the recipient object itself is wholly responsible for the MEANING and effect these method calls have. "Message passing" is an apt metaphor for such a thing.

This is "message passing" because it is the recipient object who must ALWAYS INTERPRET ALL messages it receives. Therefore method-calls have the semantics of "messages".

Whereas in Java and C++ and JavaScript, the caller of an object can also directly modify other objects without the modified object being "aware" it is being modified. The "meaning" of such modifications is then determined by whoever makes them. Say I store the property "you.age = 253" into your object. But what does that mean? What does it signify? Only the object which stored that value "knows" why it did that, what it "means", and what implications it should have for the later progress of the program.

So even though in Java etc. you can implement message-passing simply by calling methods, you can also do other things to the object from their outside than only send them "messages". An Object-Oriented language according to Alan Kay (according to me) then is one where the only way objects can communicate with other objects is by calling methods i.e. by message passing, never by modifying the objects directly from their outside.
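Ruby happens to follow the Smalltalk rule here, so the point can be shown concretely (Counter is an invented class): instance variables are invisible from outside, and the only way in is a message.

```ruby
# Instance variables in Ruby, as in Smalltalk, are private to the object;
# every interaction with the state goes through the object's own methods.
class Counter
  def initialize
    @count = 0   # internal state; no reader or writer exists unless we define one
  end

  def increment
    @count += 1
  end

  def value
    @count
  end
end

c = Counter.new
c.increment
c.value  # => 1
# There is no `c.@count` syntax and no setter, so the object alone
# decides what its state means and how it may change.
```

Contrast with a public field in Java or a plain property in JavaScript, where any caller can assign directly and the object never finds out.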

100% agree that in object-oriented programming, only the object itself should care about its internal state. The object should know about its behaviors and what state changes are actually valid.

In Java, many projects overuse getters and setters. Getters and setters break encapsulation, public setters even more so, since they allow any piece of code anywhere to change an object's internal state at any time.

Consider a class Person. A person has a name, SSN, and salary. Let's assume the name and SSN cannot be changed. What about the salary? We don't have to add a setSalary() method. Instead, we could have something like: public void acceptNewJob(Job newJob) { salary = newJob.getSalary(); }
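A Ruby rendering of the same sketch (Person and Job are hypothetical classes mirroring the Java above): the salary changes only through a behavior the object defines, never through a bare setter.

```ruby
# Salary is read-only from outside; it changes only as a side effect
# of a meaningful behavior, not via a setter.
Job = Struct.new(:salary)

class Person
  attr_reader :name, :salary   # readers only, no attr_writer

  def initialize(name, salary)
    @name = name
    @salary = salary
  end

  def accept_new_job(new_job)
    @salary = new_job.salary
  end
end

p = Person.new("Ada", 50_000)
p.accept_new_job(Job.new(65_000))
p.salary  # => 65000
```

The behavior-named method keeps the state change tied to a domain event, which is the point being debated in the replies below.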

This is one more layer of abstraction that was unneeded. In your example you went through all that trouble to get the same result, whereas there is no notion of a Job inside the Person encapsulation (since you don't set a job field); it's just the salary the object cares about. So why pass in a Job? It's confusing for the reader.

That example is correct, because in OOP you should not simply change state/values inside another object; you pretty much end up with an anemic domain model where classes are just structs and the code that modifies them is not part of the class. What the parent meant was that you call "accept new job" on the Person object and then use that object, instead of just setting the salary. Maybe it was oversimplified, though.

„[...] in OOP you should not simply change state/value inside other object, because you pretty much end up with anemic domain model where classes are just structs and code that modify it is not part of the class.“

Yep, and that’s pretty much the simplest and most modular design for that problem.

This is one of the cases where OOP (whether using methods or messages) leads to more coupling and less flexibility.

I know what the parent meant. Your explanation now directly contradicts what the code does: even with the Job method you are directly setting salary, except now in a more circuitous way for the sake of abstraction. The Person class has no notion of a Job.

MVC is not OOP (at least not what OOP was originally) and it's a shame that corruption of the idea was allowed to creep in.

I do like your use of the word "anemic" to describe the method free classes that pass for objects these days.

What you are referring to are container classes, which exist solely to hold members. This is not the case here. The setting of salary is one of many behaviors offered by the class.

I especially agree with "This means that objects can only communicate by calling methods". I think we can add another thing: the object can also expose readonly properties, which allow external modules to easily access the object's state but not change it.

Read-only properties would be implemented as methods that return the value of an internal variable. There should be no way to directly access the internal state of an object, even if it is read-only -- because that would prevent you from changing internal details (eg. do you store the property in a float or a double?), breaking the "encapsulation".

Sure, if not implemented as methods that return the value of an internal variable, how can one achieve the "readonly" goal?

I tried hard to understand what you were saying but I still see nothing that messages do and Java methods don't, or the other way around.

Everything you say is confusing to me, but I'll just pick on one:

> An Object-Oriented language according to Alan Kay (according to me) then is one where the only way objects can communicate with other objects is by calling methods i.e. by message passing, never by modifying the objects directly from their outside.

What guarantees that when I send a message to an object, that object won't decide to change itself in response?

In other words, that distinction you are trying so hard to build between message passing and function calling is nonexistent.

> What guarantees that when I send a message to an object, that object won't decide to change itself in response?

I don't think that's what the other commenter is saying.

They're saying that internal state is not directly manipulable from external sources. So an object `x` cannot have a method that takes in another object `y` and does `y.foo += 3`. Instead, it must do something like `y.addFoo(3)` to accomplish the goal.

This is superficial in some sense, but then every distinction between various programming languages is superficial if you look at it right. :)

The idea is that the object itself is the only thing which can manipulate its internal state. Therefore, to use these objects, the programmer must have imbued them with the necessary methods. This is restrictive (you have to implement the methods before you can manipulate the state), but this is the core design principle I think the other commenter is trying to tell you about. Objects cannot be manipulated willy-nilly; they must have implemented some method which can be called from the outside. This restriction creates a much more solid barrier of distinction between the responsibilities of the various objects at play.

Right, and that is a subtle notion. You may not need that level of encapsulation the same day you are programming it, but as the system grows it gives you benefits in maintainability, including observability. That may be why the benefits of Object Orientation are not so obvious at first look; you need to consider the future evolution of the system. We need to see the forest for the trees.

Here's an example of the distinct nature of Smalltalk's message passing:

Control structures

Control structures do not have special syntax in Smalltalk. They are instead implemented as messages sent to objects. For example, conditional execution is implemented by sending the message ifTrue: to a Boolean object, passing as an argument the block of code to be executed if and only if the Boolean receiver is true.

The following code demonstrates this:

result := a > b ifTrue:[ 'greater' ] ifFalse:[ 'less or equal' ]


But there's nothing specific about message passing to that example, you can do this with regular function calling:

For example in Kotlin:

    // "Boolean" clashes with Kotlin's built-in type, so call it Bool
    class Bool(val value: Boolean) {
      fun ifTrue(closure: () -> Unit) { if (value) closure() }
      fun ifFalse(closure: () -> Unit) { if (!value) closure() }
    }

    Bool(a > b).ifTrue { /* do something */ }

Your class will not work without a language-level if/else construct or something equivalent. In Smalltalk if/else is implemented purely through message passing. There is no "real" if/else.

IIRC, Smalltalk has a Boolean class and two subclasses of Boolean: True and False. There is a single method with two arguments (ifTrue:ifFalse:). The method is then overridden in each subclass: True evaluates the ifTrue argument; False evaluates the ifFalse argument. This happens dynamically, at runtime. Again, the mechanism is generic enough to fully replace all use cases of "traditional" if/else constructs.
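For those who don't read Smalltalk, here's a rough Ruby imitation of the mechanism (class and method names are invented for illustration): each boolean class decides for itself which block to evaluate, with no if/else statement in sight at the call site.

```ruby
# Toy re-creation of Smalltalk's True/False dispatch.
class SmalltalkTrue
  def if_true_if_false(true_block, false_block)
    true_block.call   # True simply ignores the false branch
  end
end

class SmalltalkFalse
  def if_true_if_false(true_block, false_block)
    false_block.call  # False simply ignores the true branch
  end
end

# In Smalltalk, (a > b) would answer one of these objects; pick one by hand here.
bool = SmalltalkTrue.new
result = bool.if_true_if_false(-> { "greater" }, -> { "less or equal" })
# result == "greater"
```

The "decision" is made entirely by ordinary polymorphic dispatch: which class the receiver belongs to determines which branch runs.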

Clearly, you haven't thought this through.

Edit: People here would do well to read this: https://pozorvlak.livejournal.com/94558.html

It still doesn't seem to have anything to do with message passing vs method calling. Yes, Java doesn't implement if/else as syntax sugar like that, but it could, and it could use virtual methods to do it and not have to implement a special kind of message-passing method. There is nothing preventing you from writing a Smalltalk on the JVM that uses regular ol' Java methods to do the same thing. So the question remains: what the hell is "message passing" and what differentiates it from a virtual method call?


My best guess, especially given Alan Kay's statement "I wanted to get rid of data" is that it is more of a style of coding than a technical distinction. I could be misinterpreting him and it would be nice if he would mention a small and concrete example that illustrates the True Meaning of Messages.

I see it as the style of coding you run into reading AST-processing code in Java, where, because the language lacks discriminated unions and pattern matching, you don't simply look at the `expression` object you're given and see that it is an `AdditionExpression(LiteralInteger(1), VariableName("x"))`. Rather, you politely ask the expression to describe itself to your own `visitor` object, and the structure of the expression reveals itself by calling `visitor.VisitAddition(leftSide, rightSide)`, and the left side calls `visitor.VisitLiteral(1)` and the right side calls `visitor.VisitVariableName("x")`. Data has been reformulated into a series of calls.

That is the same pattern of coding as having booleans be defined by whether they call the ifTrue or ifFalse branch.
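A minimal Ruby sketch of that visitor style (class names are illustrative, loosely following the Java example above): the expression never exposes its data directly; it only reveals its structure by calling back into the visitor.

```ruby
# Hypothetical mini-AST: data is reformulated into a series of calls.
class LiteralInteger
  def initialize(value)
    @value = value
  end

  def accept(visitor)
    visitor.visit_literal(@value)
  end
end

class AdditionExpression
  def initialize(left, right)
    @left, @right = left, right
  end

  def accept(visitor)
    visitor.visit_addition(@left, @right)
  end
end

# A visitor that pretty-prints by asking subexpressions to accept it in turn.
class Printer
  def visit_literal(value)
    value.to_s
  end

  def visit_addition(left, right)
    "(#{left.accept(self)} + #{right.accept(self)})"
  end
end

expr = AdditionExpression.new(LiteralInteger.new(1), LiteralInteger.new(2))
expr.accept(Printer.new)  # => "(1 + 2)"
```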

Subjectively I despise programming that way and much prefer using a language that lets me define my data structures immutably and precisely without code, then process them with compiler guarantees that I handle all cases. Reading the data types is the fastest way to understand what a piece of code is trying to accomplish. As Fred Brooks said:

> Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.

Clearly, you haven't even bothered to read up about Kotlin.

    (a == 1).ifTrue {
      // ...
    }.`else` {
      // ...
    }

(with `else` backtick-escaped, since it's a keyword) is standard Kotlin with its DSL syntax.

`ifTrue` and `else` are extension methods added to the `Boolean` type.

You know that there have been new developments in PLT since Smalltalk, right?

>`ifTrue` and `else` are extension methods added to the `Boolean` type.

You don't seem to understand what this discussion is about. Extension functions in Kotlin are statically dispatched, so while they are a nice feature, they are completely irrelevant here.

It's not about what your invocation code looks like. The important part is that at some point the code needs to decide whether to invoke the "if" case or the "else" case. Smalltalk achieves this by having two objects/classes (True and False) that handle the same message differently. The implementation of those objects does not contain a hidden control flow statement. Your code would.

Which leads to the well known callback hell, no?

I think "callback hell" in this case would be caused by "boolean blindness" and/or lack of abstraction.

General-purpose programming languages, by their very nature, cannot provide us with constructs that are specially suited to our domain; they can only provide us with generally-useful building blocks, like booleans, integers, functions, etc.

We could try to solve our problems using only those language-provided constructs directly, e.g. using maps-of-lists-of-booleans-of-whatever. In that case, we get "callback hell", pyramids/triangles of doom, etc. because the same code needs to implement our solution and encode information about our domain.

Alternatively, we can use those language-provided constructs to write our own domain-specific constructs; then use those domain-specific constructs to solve our problem.

I've worked with people who avoid this second approach because understanding the solution requires learning those domain-specific constructs, whereas in the first approach we already know how built-in constructs like booleans work.

The nice thing about Smalltalk in this example is that if/then/else, loops, etc. are not built-in; they're library code. This takes them off the pedestal that they occupy in other languages, and makes it easier to think about replacing them with our own tailor-made alternatives.

(Note that this isn't specific to message-passing style OOP; we can also do this with e.g. recursive functions for loops, induction schemes (e.g. Church encoding) for control flow, etc.)

To be pedantic, ifTrue/ifFalse is actually compiled into the method in Smalltalk 80 and not sent as a message, as an optimization.

In practice, Smalltalk methods tend to be very short. Having too many "callbacks" (really it's just asking Block objects for their value; the "lambdas" here are just objects too) is an issue of how you are designing things. A lot of if/else stuff is avoided simply by taking advantage of the kinds of polymorphism Smalltalk allows in the first place.

Funny enough, C# squeezes by this test as any Field declaration can be seamlessly replaced with a Property that injects getter and setter methods.

A feature borrowed from Eiffel and Delphi.

This is a very good explanation of the correct answer.

I've always understood the distinction to be that messages are always by value, never by reference. Think of the actor model, or a microservices-type system: individual objects don't have any shared memory, so all they can do is pass around messages that describe information they know.

In short, the distinction between a system with method calls and one with message passing is whether or not it allows for spooky action at a distance.
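A toy Ruby sketch of that idea (the deep copy stands in for a process boundary; in a real actor system the runtime enforces this):

```ruby
# Messages crossing the "actor" boundary are copies, so the receiver
# can't mutate the sender's state at a distance.
mailbox = Queue.new

# Sender: snapshot the data by copying it into the message.
state = { balance: 100 }
mailbox << Marshal.load(Marshal.dump(state))  # deep copy: pass by value

# Receiver: mutating its copy has no effect on the sender's state.
received = mailbox.pop
received[:balance] = 0

state[:balance]  # still 100
```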

But this distinction isn't really carried through by all the systems that claim to be based on message passing.

That is an important distinction yet I prefer to think of message-passing a bit more generally. There are different kinds of message-passing, the one that avoids spooky action is just one of them, a good one but still not the only one.

Say I look over my cubicle wall and see you and "send" you a paper-airplane. The paper-airplane is the message, I can assume you will become aware of it having landed on your cubicle. Perhaps there is a question written on the paper. Because I have good eyesight I can spookily already see your answer without you having to send that paper-airplane back to me. It's still message-passing even though it is spooky.

The real distinguishing feature of "message passing", I think, is that the recipient is free to interpret the messages they receive, and that no-one else is: no-one else can modify the "mental state" of the message-recipient but they themselves, as a response to the messages they receive.

I think I must be missing a detail, because it feels to me like your spooky example of peeking over the cubicle wall is in direct conflict with the idea that nobody can modify an object's mental state except by passing messages to it.

If you're really only allowed to communicate by passing messages, then, to me, that would imply that there's some sort of "no peeking" rule.

Say I send you a mutable object as an argument of a message / method-call while holding on to that same object in some variable as well. You modify that object without telling me. Does that mean you are spookily communicating with me?

I would say no because you are not communicating with me at all. I am unaware that you have modified the object.

I can ask that object about its state and that way detect that somebody has changed it. But how can I ask it about its state? Only by sending it a message/calling its method, IF I am using an "Object-Oriented Language".

There is nothing spooky about multiple observers having access to the same shared mutable object, is there? They can still only extract information from it by - sending a message - in an Alan Kayesque Object Oriented language.

But in a hybrid language like JavaScript where I can directly read and write and even delete fields of an object it is spooky, this value was just here and now it is no longer and I don't know why because somebody else somewhere deleted that field.

Gotcha. I think, then, that the devil is in the details. That Erlang ets module I mentioned elsewhere does something like this, where it creates an entity that acts as a shared mutable data store, but communication with it still happens by message passing, so you still get certain guarantees. Notably, if you query it for some information and then get your response back, that response's contents aren't going to change on you. So the behavior's really more like querying a database than getting a pointer to a mutable data structure.

There's maybe a "letter of the law vs spirit of the law" thing going on here, too. Having an object that just implements an array feels like it's violating the basic idea, even if you are technically only communicating with it by passing messages. I'm pretty sure the original vision was that objects would represent interesting entities from the business domain and not just be heavyweight analogues for basic data structures.

Yeah, I don't think that ever could work. And that's kind of a problem. At best, what you would have is a system where different parts have "names"/ids but there's no guarantee that a named object is going to actually be present and so each object has to be able to handle the situation when the object isn't there.

There are plenty of Erlang programmers out there who, I'm guessing, would happily tell you that it works quite well.

(Just don't ask them about the ets module.)

Ruby works like that, and it works just fine, unless you're dealing with developers that are intentionally trying to mess with your head.

The reality is that most of the time when the world is dynamically changing, there are reasons for it.

E.g. the typical Ruby example is an ORM where the methods available reflects the database schema of the database you just connected to. Whether you get an exception because the method you expected doesn't exist, or get an exception because some generated method got an error from the database doesn't matter: You need to deal with it either way.

All that changes in practice is the name of the error condition you need to handle.

What's a reference, though? In actor based messages, you can pass the mailbox id, which is not a memory reference, but semantically a "reference" to the object encapsulated by the actor.

I think messages help better convey three things:

1) that objects are acting like async, independent machines (I picture the Erlang actor model, but maybe that's a stretch). This makes messages a more natural way to emphasize that these actions can happen across networking or threading boundaries;

2) the message is explicitly not inheritance and therefore doesn’t contain implementation. This means it’s really a messaging contract and only the interface matters!

3) redirecting, storing, or replaying messages are each very powerful concepts that lead to many useful features (networking state, save/load, undo/redo, logging, debugging). Thinking in these terms is much harder to express with functions, so I see this as messages are the _data_ of function calls, and function calls are more the _act_ of calling.

That said, I’d love to hear other’s take on this as well. The differences from functions are subtle and it doesn’t help that we use the same/similar words to mean different things.

In most systems, try to call a method that doesn't exist on something. It will bark at you.

In the types of OO systems Kay discusses, it's up to the receiver to determine whether or not to respond to a message and how to behave if it doesn't "understand" the message. That's how you get maximum polymorphism as well -- for example, all kinds of objects are "cancelable" if they respond to / understand the message #cancel, etc, etc

OK, so a truly message-passing OO system needs some equivalent of Ruby's method_missing()? Is that what you mean?

The structure I'm envisioning here is that every object has a public pass(m) method that takes an input message and has no other public methods. Every message is a legitimate input to pass(m), and every object can call the pass method of any other object. It's up to each object to decide for itself what it will do with any given message that it receives. If an object wants a response to its message it can identify itself in the message and then hopefully the object being passed the message will choose to pass a message back.

This seems exactly analogous to how a collection of webservers interact. They can only communicate with each other by tossing a chunk of bytes around hoping that the target knows what it's supposed to be doing with those bytes.

> and every object can call the pass method of any other object. It's up to each object to decide for itself what it will do with any given message that it receives.

It's a horrible programming model, though. We should be glad this is not how most languages work.

>It's a horrible programming model, though.

Have you ever actually programmed in this model? I wrote a correct Raft library in ~300 lines of extremely legible code using this model.

> It's a horrible programming model, though. We should be glad this is not how most languages work.

This is not only about languages, but also environments. The argument Kay makes is that in order to make very large complex systems that are resilient, you need something like this kind of message passing. And indeed, this is how the Internet works too

The Internet works in hundreds of different ways, message passing, RPC, REST, store and forward, propagation, etc...

It's all made possible because of the protocol which is a kind of message passing. If a node somewhere on the Internet doesn't receive a packet for whatever reason, the whole network doesn't come crashing down.

I'm not sure method_missing is powerful enough to do message passing. It's method_missing, not method_received. You can't intercept messages that already have a method defined for them. The sender chooses which method to call, and may end up in method_missing. It's not entirely up to the object how to respond to a message.

You can create a class which has no other methods than 'method_missing'. The explicitly declared methods are just useful syntactic sugar for how to respond to specific messages.
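A minimal sketch of that in Ruby (class name and replies are invented): the object's entire behavior is one method_missing handler, so every message, known or not, is interpreted at call time by the object itself.

```ruby
class EchoObject
  # Every message arrives here; the object decides what it "means".
  def method_missing(name, *args)
    "received message #{name} with #{args.inspect}"
  end

  # This object claims to understand any message.
  def respond_to_missing?(name, include_private = false)
    true
  end
end

obj = EchoObject.new
obj.cancel        # => "received message cancel with []"
obj.resize(3, 4)  # => "received message resize with [3, 4]"
```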

This sounds like the Event Bus (which can be implemented in an OOP language) to me.

Do people use that publish-subscribe model within a process rather than between processes? Any time I've encountered the 'event bus' model it's been for 'enterprise' (for lack of a better term) scale integration (i.e. gluing together applications across an organisation).

Sure, no matter what kind of programs. To get an idea, take a look at the example code at the end of the page: https://github.com/spinettaro/delphi-event-bus

4) Also, from a design point of view, the focus is a conversation of messages. Often the way people do "OO" is they focus on entities, not the messages and conversations.

By "functions" we generally mean stateless things which always return the same result for the same set of arguments.

Objects in contrast are not stateless things. They can know the history of the messages they have received earlier, as reflected in their internal state.

The distinguishing feature of message-passing OOP I think is that the receiver of a message is free to decide how it will respond to it. Will it always return the same result for the same inputs? Not necessarily. It decides. It is the decider.

One way to think about Objects is as independent computers which get inputs from their surroundings and then somehow process those inputs. They might not return any result, but still do something useful. The point is they are PROGRAMMABLE objects, they are "live".

I have seen plenty of functions in my career that are anything but stateless.

They may be called "functions" in the programming language you use, but they are conceptually not "functions" as understood by the Functional Programming community and mathematics. This discussion about Alan Kay's "concept" of Object Orientation is much on the conceptual level, so we also need to consider what the "concept" of 'Function' is.

Given everything that's been said here, I don't think comparing Kay-objects and actors is a stretch. Why do you think it is?

Because Alan Kay himself says that he drew a lot of inspiration from Hewitt's actor framework in developing object oriented programming.

Messaging is more general.

A message can be asynchronous. It can also be synchronous. You can persist messages, replicate them. Distribute them.

With a method call, the "caller" control what code gets executed. With a message-send, the sender politely asks the receiver to respond to the message. Is there code executed? Maybe. Maybe not. Maybe the message is ignored.

So something like Smalltalk is (roughly) at the minimal end for the capabilities of "messaging" whereas it is either at or slightly beyond the maximal end for "calling a method". The fact that there is a slight overlap makes things difficult, but not impossible, to distinguish.

In the articles on Software-ICs[1], they call the flavour of OO "object/message"

"Sending a message to an object is exactly like calling a function to operate on a data structure, with one crucial difference: Function calls specify not what should be accomplished but how. The function name identifies specific code to be executed. Messages, by contrast specify what you want an object to do and leave it up to the object to decide how."

I am not sure I'd agree with that 100%, and there are tons of ways to argue against it, but I think it's a start.

[1] https://blog.metaobject.com/2019/03/software-ics-binary-comp...

The way I've come to understand this is that a method is how an object responds to a message or messages.

I've come to learn that a lot of us think about creating classes first and the conversations second. The conversations, and the messages in those conversations, should often tell us whether creating a class right now is needed. I really follow the philosophy of putting off decisions, not locking oneself into an abstraction too early on (not enough information/data), and preserving flexibility. Often a struct is enough for carrying pieces of a related idea around. It isn't until an object has internal state, or a need to have conversations with other objects, that methods defined in a class make sense. Furthermore, classes without methods are just state containers.

I write some Ruby and some of the basis of my ideas come from Sandi Metz's book, "Practical Object-oriented Design in Ruby" where she describes message-passing.

"A message is a request for an object to carry out one of its operations. A message specifies which operation is desired, but not how that operation should be carried out. The receiver, the object to which the message was sent, determines how to carry out the requested operation. For example, addition is performed by sending a message to an object representing a number. The message specifies that the desired operation is addition and also specifies what number should be added to the receiver. The message does not specify how the addition will be performed. The receiver determines how to accomplish the addition. Computing is viewed as an intrinsic capability of objects which can be uniformly invoked by sending messages.

… A crucial property of an object is that its private memory can be manipulated only by its own operations. A crucial property of messages is that they are the only way to invoke an object's operations. These properties insure that the implementation of one object cannot depend on the internal details of other objects, only on the messages to which they respond.

Messages insure the modularity of the system…"

pages 6 & 7, [pdf] Smalltalk-80 The Language and its Implementation


I don't know what to say - that just sounds like normal method calls to me. Why do we need the term 'message passing' on top of that, which seems to additionally confuse people around synchronous or not?

To make this concrete, here are some things you might like to do with normal method calls, but cannot:

1. Make a logging proxy that wraps an arbitrary object. Any method you call on the logger, it logs the name of the method, and then invokes that method on its wrapped object.

2. Make a "tee" object. Any method you call on the object, it invokes on its N child objects.

3. Make a "replay" object. Any method you call on the object, it saves instead of executes. Later you can replay the saved method calls by redirecting them onto another object.

Message sends can invoke a method, but they can also go beyond that.
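For instance, the "tee" from point 2 takes only a few lines in a language with a dynamic message hook; here's a Ruby sketch (class name invented, method_missing standing in for true message interception):

```ruby
# Any message the Tee receives is forwarded to all of its children --
# something you can't express generically with ordinary method calls.
class Tee
  def initialize(*children)
    @children = children
  end

  def method_missing(name, *args, &block)
    @children.map { |child| child.public_send(name, *args, &block) }
  end

  def respond_to_missing?(name, include_private = false)
    @children.all? { |child| child.respond_to?(name) }
  end
end

tee = Tee.new([], [])
tee.push(42)  # pushes 42 onto both arrays
tee.length    # => [1, 1]
```

The Tee never declares push or length; it just forwards whatever message arrives.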

Consider JavaScript's "call" if you are familiar with that:

You say somefunk.call(someObject, ...someArgs)

The function executes in the context of 'someObject' meaning it can read and write its fields with expressions like 'this.color = "blue"'.

You could say that somefunk() is a method and you are making a method-call. But it is not a method of 'someObject'. Yet by calling it you CAN manipulate the fields of someObject.

JavaScript is NOT the type of "message passing/Object-Oriented" system that Smalltalk-80 is: you can communicate with objects, meaning share information with them by reading and writing their fields, by means other than their methods alone.

FFS, this isn't that hard to understand. What is your language background?

The core difference is that messages are interpreted by the object itself. At runtime.

Here is a practical example where the difference is obvious. Let's say you have an object that does business logic. You want to log all of its operations without changing its logic or the code that calls it.

In a method call language, this is a pain in the ass. You have to subclass the original class, override every single method, and call the original logic + logger in every single one of those methods. And after all that work all you get is a lousy logger that's just good for one class and needs to be modified if you add something to the original class.

In a message-passing language, you would create a proxy class that passes all of its messages to the original object, then calls the logger. This implementation would involve writing only one method and would be applicable to absolutely any class/object in the system.
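A rough sketch of such a proxy in Ruby (class name invented; Ruby's method_missing stands in for a true message hook): one handler logs and forwards every message, regardless of the target's class.

```ruby
class LoggingProxy
  def initialize(target, log)
    @target, @log = target, log
  end

  # Every message is recorded, then politely forwarded to the target.
  def method_missing(name, *args, &block)
    @log << "#{name}(#{args.inspect})"
    @target.public_send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name)
  end
end

log = []
account = LoggingProxy.new([10, 20, 30], log)
account.sum  # forwarded to the array, logged as "sum([])"
log.first    # => "sum([])"
```

The same one-method proxy wraps absolutely any object, which is the point: in a strictly method-call world you'd have to enumerate and override every method of every class you wanted to log.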

And this is just one out of many, many instances where message-passing allows for elegant solutions while method calls make you miserable.

For some definition of "normal method calls" :-)

Obviously it is a definition of normal method calls in Smalltalk.

Even with Smalltalk, how well or poorly do you think setters/getters fit within that description?

It's a fuzzy one I'm not sure I understand either.

One thing though: "true OO" guys often complain of false message passing based on function calls. Others here are hinting at asynchronous vs synchronous. Very likely to be what Kay meant.

Another thing to me, is that message interactions mean standalone data exchange. As if to design your interfaces to accept very small but complete objects as communication protocol. Rather than passing structures to be modified. I'm not sure it is even true at all but that's a feeling I got from Kay's talks and his biology metaphor. In a way it's a stateless bus connecting stateful objects (probably trying to minimize their internal state space as much as possible too).

Message passing is much more general than method call. If you look at Erlang/Elixir (making the assumption that Erlang processes are akin to objects): messages can be sent to any process running on any node on the network, the answer (if required) can come from a third process, messages can be stored for future processing. And since processes may be created or die dynamically, and messages sent over the network may timeout, there are built-in tools to handle all the failure modes.

Method calls in C++ or Java are rather primitive, in comparison. Though of course you can build/emulate all of that on top of either. After all, the Erlang VM (BEAM) is written in C...

At the risk of coming off as snarky toward Alan Kay, whom I respect tremendously, I believe the Kay-official answer to the question "what is Kay oop?" is "you just don't get it." Which for me has always raised a flag. We all know how to call methods, exchange messages, and raise and subscribe to events, yet there is always the implication that we haven't yet understood the real meaning of Kay oop, as if there's some gold nugget of knowledge yet to be gleaned. Pretty sure there's no nugget.

> I believe the Kay-official answer to the question "what is Kay oop?" is "you just don't get it." Which for me has always raised a flag.

There are at least two problems at work here:

1. The words that Alan Kay is (rightly) using are overloaded, since other languages adopted the OO term but interpreted it differently. This adds to miscommunication: the words sound familiar, but their meaning might be subtly or drastically different.

2. The advantages of OO as conceived by Alan Kay are hard to convey other than by hands-on experience. Debugging and developing programs in Smalltalk is really fun and effective, but I had to write my thesis at a Smalltalk company first to appreciate it.

This is an instance of the "Blub paradox" as termed by Paul Graham. Many non-Lispers have a hard time appreciating the benefits of powerful metaprogramming, and reading articles about it won't change that. Articles might pique interest, but one has to play around with those concepts for some time to gain a feel for their benefits and limits.


I think the "you don't get it" refers to the knee jerk reaction of trying to map methods to messages.

One imperfect but compelling analogy is the HTTP GET request.

A GET request was originally envisioned as a request for a file at a path. `http://host/get/this/thing.data` The requestor decides what to do (give me this file), the host obeys.

Nowadays we understand GET requests as abstract. The requestor makes a request, but it is the host who decides what to do with it: forward it somewhere else, emit an error, dynamically generate the content, return a real file, serialize and cache the request, etc. The client doesn't know and can't know what the server does.

Notice in particular that hosts are expected to handle arbitrary URLs sensibly.

Java has no support for this, but Ruby and ObjC have these sort of facilities: forwarding, dynamic replies, etc. It's up to the libraries to make good use of it.

> A GET request was originally envisioned as a request for a file at a path. […] Nowadays […] the host who decides what to do with it: […] dynamically generate the content

Nope, a very early description of HTTP – years before 1.0 – already mentions that.


Actually Java has partial support for it via proxy classes.


>I still don't really understand what the big idea of message passing compared to a method call is.

For one, the binding happens much later.

Second, messages you can't handle can be automatically forwarded and delegated.

Third, it doesn't have to be local, the receiver could be a server in the other side of the world.

Redux action dispatchers are kind of messages...
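The second point (automatic forwarding and delegation) can be sketched in a few lines of Ruby; the `Forwarder` class and its method names here are invented purely for illustration:

```ruby
# An object that handles :log itself and forwards every message it
# doesn't understand to a wrapped target, decided at the moment the
# message arrives rather than at compile time.
class Forwarder
  def initialize(target)
    @target = target
  end

  def log(msg)
    "[log] #{msg}"
  end

  # Any unhandled message is delegated to the target.
  def method_missing(name, *args, &block)
    @target.send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name) || super
  end
end

f = Forwarder.new([1, 2, 3])
f.log("hi")   # handled locally => "[log] hi"
f.sum         # forwarded to the Array => 6
```

The Forwarder never declares the Array's interface anywhere; the delegation is worked out per message, at send time.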

I think saying "the binding happens much later" in 2019 is unnecessarily hard to understand. The binder in Unix has always been called a "linker"; the term "binder" was used in a bunch of PARC systems and consequently in Oberon, but since Unix-influenced systems (and I think DEC-influenced systems like CP/M) called it a "linker", the verb "link" has entirely supplanted the verb "bind" for this purpose, about 30 years ago, except when people are repeating the phrase "late binding" or "late-bound" without understanding what it means.

So I'll explain what it means here, because it took me a lot of years to understand that "late binding" was a specific concrete concept rather than a vague generality.

What we're talking about when we say "binding" or "linking" is the process of associating a reference — typically to a function that's being called, though occasionally to a global variable or something — with a particular referent, such as a particular chunk of executable code. When we say that some "binding" is happening "late", what we mean is that this linking is happening at run-time — at the time of the actual call, not even at program startup time, as with ld.so. Doing the linking (or "binding", as some people called it 40 years ago) at call time means that the same function call can invoke a different function every time you call it.

Of course, that's what method calls in languages like JS, Lua, or Python do, as well as function calls in languages like Scheme, JS, Lua, or Python. By contrast, calls to C++ or Golang methods are compiled into machine-code calls to particular machine-code functions, unless the methods are virtual or via an interface, respectively, and it's the linker that links (or "binds") these calls to their callees. (Unless they're inlined, blah blah.)

This, of course, gives you immense flexibility, at the expense of some performance and predictability.
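To make "linking at call time" concrete, here's a small Ruby sketch (class and method names invented): the call site never changes, but because the binding happens at each call, redefining the method mid-run changes which code that same call invokes:

```ruby
# The call `greeter.hello` is linked to code at call time, not at
# compile/link time, so redefining the method between two calls
# changes what the very same call site does.
class Greeter
  def hello
    "hi"
  end
end

greeter = Greeter.new
first = greeter.hello    # bound at this call => "hi"

class Greeter             # redefine at runtime; call site unchanged
  def hello
    "hello again"
  end
end

second = greeter.hello   # same receiver, same call site => "hello again"
```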

You're right to distinguish message passing from late binding, as message passing is more general.

In Java, calling a method foo on an object compiles to different things depending on the class of the object. The compiler determines whether the object has a method foo, and compiles the call either into a direct jump or an indirect jump through the object's vtable. (Which slot in the vtable is fixed by the compiler.)

With message-send semantics as in Smalltalk or Objective-C, the compiler only knows the method's name, and calling a method foo compiles to the same thing irrespective of the object's class. It's the object's job to determine what foo means, and it will do the dispatch to determine which method to use, or if no method matches, call a missing-method handler.

That sounds like it's only an implementation difference.

You could compile Objective C with a profiling or globally analysing compiler that compiles method calls to different things depending on the class of the object if you wanted to, and you could compile Java with an approach that compiles to the same thing regardless of the object's class if you wanted to.

But message passing is always described as a programming model, not an invisible implementation detail.

The difference in the programming model is that in a message passing language you can define logic around the handling of the message. Usually that logic is nothing more than passing to a method, but in some cases you may want to have other behaviour which you can write code to define. The other behaviour is what is absent from languages where methods are implemented as a jump as described by the parent.

The difference described in the post one level up is basically the difference between C++ virtual and non-virtual methods. It is not an invisible thing. Virtual methods behave differently than non-virtual ones.

No, it's a bit more general than that. It's the difference between C++ virtual methods, pointers to which are stored in an array and always called by array index, and Python methods, which are stored in a dictionary and called by name. In fact you can say that Python (unlike C++ or Java) implements message-passing semantics, or is at least equivalent -- since in order to invoke a method, a caller's implementation needs to know nothing about the method or its class except the method name, the arguments, and a reference to the target object.

Also implicit in message passing is a form of structural typing; objects that understand the same set of messages (called a protocol in Smalltalk-speak) are type-equivalent. A client class may delegate to any object that understands the messages the client sends to its delegate. One of the advantages this brings is all the crazy things you can do with #become: in Smalltalk. A perfectly valid way of responding to a message you haven't implemented is to construct an instance of a class that has an implementation for that message, and then become: that instance, swapping all references to yourself in system memory with a reference to the instance you created before passing the message onto the instance (your new self). There's no way to implement general #become: in C++ or Java because there's no guarantee the result will be sound type-wise.

The changes are subtle but they open up a world of dynamism that Java programmers don't have access to. Which is fine for Java programmers, who decided they don't want that kind of dynamism anyway, but there are programmers who work better shaping a live system rather than declaring type ontologies in advance. And Smalltalk is designed to work well with such programmers.
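A hedged Ruby analogue of that protocol-style type equivalence (the classes here are invented for the sketch): two unrelated classes understand the same message, so either can stand in as the delegate, with no shared superclass or declared interface:

```ruby
# Bell and Phone share no ancestry, only a "protocol": both
# understand the :ring message.
class Bell
  def ring; "ding"; end
end

class Phone
  def ring; "brrr"; end
end

# Alarm delegates to anything that understands :ring.
class Alarm
  def initialize(ringer)
    @ringer = ringer
  end

  def trigger
    @ringer.ring
  end
end

Alarm.new(Bell.new).trigger    # => "ding"
Alarm.new(Phone.new).trigger   # => "brrr"
```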

But that's what I mean - you could implement non-virtual methods using the same technique as virtual methods, if you wanted to. And using static analysis or profiling you could implement virtual methods as non-virtual methods.

Are you telling me that if I turn on a compiler optimisation my message passing program suddenly becomes a method call system?

So the difference cannot be what it compiles to.

You can not turn your virtual methods into non-virtual ones without affecting what your user-level program does. You decide whether you use virtual methods or not in your C++ program, not the compiler. They affect what your program does not simply how it does it.

Ruby and Objective-C are both strongly influenced by Smalltalk. Yes. That’s on the right track.

I’d say messages are very loosely coupled compared to a function or method call.

In c everything is laid out by the linker. You can play games with function pointers, of course, but that is tricky.

Message passing isn’t really resolved till the message is actually sent. Late binding I think it’s called. Objective c gives so much control, you could add remoting at dispatch time.

I think Java’s methodinvoker is pretty much it for Java. You can implement an interface at runtime. But you don’t control dispatch quite as deeply.

Microservices approximate some of Kay’s ideas really well, but imho they give up too much control over dispatch. In Ruby or Objective-C you can hijack the dispatch system in a way that’s really hard to do with microservices.

In many languages, a method is simply a function that has some kind of binding to the object. In fact, some languages treat methods as nothing more than syntactic sugar around passing the object as the first argument to a function. To call a method is to call a function, with the technical details that go along with that.

Messages differ in that you are only providing a description of what you want the object to do. The object then interprets the message and decides how to respond. That may mean calling a method that matches the description, but it may choose to do something else entirely. An object may even handle messages that have no corresponding methods at all.

The big idea is that an object can reason about the message before it handles it.
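A rough Ruby sketch of that idea (the `AuditedProxy` class and its behaviour are invented for illustration, not from any library): a proxy that inspects each message's name before deciding whether to forward it or refuse it:

```ruby
require 'ostruct'

# A proxy that reasons about each incoming message -- its name and
# arguments -- before handling it: attempted writes are recorded and
# refused, reads are forwarded to the wrapped record.
class AuditedProxy
  attr_reader :rejected

  def initialize(target)
    @target = target
    @rejected = []
  end

  def method_missing(name, *args)
    if name.to_s.end_with?("=")   # inspect the message itself
      @rejected << name
      :rejected
    else
      @target.send(name, *args)
    end
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

record = OpenStruct.new(balance: 100)
proxy = AuditedProxy.new(record)
proxy.balance       # forwarded => 100
proxy.balance = 0   # intercepted: the write never reaches the record
```

The caller wrote what looks like an ordinary method call, but the receiving object examined the message and chose to do something else entirely.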

Implementation-wise, with message passing the operation to be performed is sent as a parameter to a message handler. A method call, on the other hand, is simply the invocation of a subroutine stored in a record field (a function pointer). The following article provides some examples in the Oberon programming language which I found clarifying:


I can send a "what time is it?" message to an object even if that object has no idea what that message means.

I can not call a method that isn't there.

Whether a method call qualifies as message passing in the OOP sense would depend on the language semantics.

In any event the caller must not be specifying the function to call (i.e. it's a virtual method in C++ terms) and it should be possible to send a message that is not understood without causing a fatal error (requires reflection in many languages).

> and it should be possible to send a message that is not understood without causing a fatal error

I do that sort of thing in C via a SendMessage(msgType...) idiom. Instead of a bunch of functions, you have one function, a message type, and some data. If the destination doesn't know what to do with a message of a given type it just spits it back.
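The same idiom translated into a Ruby sketch, purely for illustration (the handler names are made up): one entry point, a message type plus payload, and unknown types handed back as a value instead of blowing up:

```ruby
# One dispatch function takes a message type and data; unknown types
# are returned to the sender as an :unknown_message reply.
HANDLERS = {
  add:  ->(a, b) { a + b },
  echo: ->(s)    { s },
}

def send_message(type, *data)
  handler = HANDLERS[type]
  return [:unknown_message, type] unless handler
  [:ok, handler.call(*data)]
end

send_message(:add, 2, 3)    # => [:ok, 5]
send_message(:frobnicate)   # => [:unknown_message, :frobnicate]
```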

This is basically how defunctionalization is implemented. I'm less confident about the "what if the destination doesn't know about the msgType you sent" issue, because defunctionalization itself is a whole-program technique where the destination context has to know what to do with your function call. But your solution (i.e. raising an error condition) looks reasonable enough.

I think messages are less bound to types than method calls, more late-bound, and more reified in some cases.

I don't view there being much of a difference. I wrote about this a few months ago (http://boston.conman.org/2018/11/21.1) but the idea of it came years before.

Microservices probably exemplify this idea. And Erlang style languages/development.

It’s just an idea. It seems to work well in some situations, given the successes owed to Erlang and dev architecture like microservices.

People have their war stories about terrible experiences too.

Message passing in Java is basically joined at the hip with method calls, at least in the same thread. On new Foo(), you call .bar(), and then block until you get a response. Because you know .bar() is there before you ever call it, it's not super obvious what the message passing is.

Now imagine something like Erlang, which is a complete mind screw for someone used to writing sync code. The FooProcess can send the BarProcess an async message, but there is no guarantee that the BarProcess received the message, or that it will even be alive when the message arrives.

Maybe a good place to start is to read up on method_missing and message passing in Ruby.

Why would you ever want this kind of uncertainty in-process? If I have an object that lives in my process and I want to call a function on it, I rely on the compiler to tell whether that function exists or not. If it doesn't, why would I ship with code that will never work?

If we're talking remote calls, sure, there is never any guarantee that the recipient has that method, or that it's even listening anyway.

All of that to say that the whole debate between function calls and message passing is a waste of time and irrelevant in programming.

There must be a better way to discuss this topic than to just shit all over something that other people think is a good idea, no? How can you know with such certainty that you're correct and everyone else is just being stupid?

EDIT: Err, never mind. You're apparently inclined to shit on anything you don't like in most of your comments.

> All of that to say that that whole discussion between function call or message passing is a waste of time and irrelevant in programming.

I once felt that way. Then I learned Elixir, and the actor model is my favorite thing of all time. It has amazing abstractions for dealing with concurrency.

> Maybe a good place to start is to read up on method_missing and message passing in Ruby.

I wrote a PhD on method_missing, and I still don’t really see how Ruby is message passing rather than just normal method calls.

Consider that the call-site in Ruby in the general case does not have any guarantees about how the recipient of a message will act on the message.

As an implementer, as you of course know from the amazing optimizations Truffle Ruby does, there are certainly lots of special cases where you can aggressively decide from an implementation point of view that you know how the recipient will act. So in practical terms a lot of code that uses systems that conceptually can do message passing will in practice in many - or even most - cases act exactly the same as a pure method-calling system.

But the conceptual difference is that in a message passing system, you can not do so for the general case before execution starts, because there is the chance that the treatment of messages changes dynamically at runtime. E.g. the "fun" case of a program that eval()'s a user-supplied string that redefines a method, to take the extreme case. "pry" and "irb" are good examples...

To me at least, message passing and "extreme late-binding of all things" relate to systems that conceptually allow deferring the decision of which piece of code will get called until the moment a message is passed, whether or not a specific instance actually makes use of that flexibility, while a "method call" implies that you can know in the general case by static analysis which method will be invoked, or at the very least which of a small, constrained set of methods will be invoked.

Of course you can always emulate one with the other, so the boundaries gets fuzzy.

EDIT: Put another way: You can implement message passing in any system by defining a send(object, ... args) function that implements dynamic lookup and allows the program to modify the lookup at runtime. One step up is to implement a "send(...args)" method on a class in a C++/Java style OO system. So any system can have message passing. But the difference in whether or not we consider a system message passing is largely a question of whether or not the system provides syntactic sugar to make that easy and whether or not it is considered idiomatic use.
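That send(object, ... args) idea can be sketched directly in Ruby (everything here -- the table, the helper names -- is invented for illustration): a standalone send function backed by a lookup table that the program is free to rewrite while it runs:

```ruby
# Message passing layered on top of plain function calls: a table
# maps (receiver, selector) to code, and the table itself can be
# modified at runtime.
LOOKUP = Hash.new { |h, k| h[k] = {} }

def define_handler(obj, selector, &code)
  LOOKUP[obj.object_id][selector] = code
end

def send_msg(obj, selector, *args)
  code = LOOKUP[obj.object_id][selector]
  code ? code.call(obj, *args) : :does_not_understand
end

account = Object.new
define_handler(account, :deposit) { |_self, n| n * 2 }

send_msg(account, :deposit, 10)   # => 20
send_msg(account, :close)         # => :does_not_understand

# Rebind :deposit at runtime; subsequent sends see the new code.
define_handler(account, :deposit) { |_self, n| n }
send_msg(account, :deposit, 10)   # => 10
```

Whether this counts as "message passing" is, as the comment says, mostly a question of whether the language gives you sugar for it and whether it's idiomatic.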

You're the Truffle Ruby person, yeah? Hi!

Re: method_missing, I suppose it's not as easy to understand as I thought, which is to say if I understood it better, I could explain it better. The best I can do is this...

1. A typical method call in Java means "I pass a message (args) to a method on some object, and block until I get a response", which usually means building up a stack of blocking calls waiting for responses. The message passing is the sending of the message, which in Java is tightly coupled to receiving the response.

2. In Erlang, you might have a process (think of it as an object for this example [1]), that sends a message to another process. The sending process doesn't need to wait for a response. Erlang does have function calls that feel pretty much exactly like method calls in Java, and then there is message passing between processes, and that separation makes the differences more clear to me, which is not to say I understand things correctly.

3. method_missing in Ruby is kind of like sending a letter in the mail to Santa Claus. There is no physical address for Santa (aka an undefined method), but someone at the post office looks at the intended destination (SantaClaus.mailing_address) of the North Pole and the message itself ("I have been good, and want a Nintendo Switch for Xmas"), and determines how to respond (method_missing) to the message.

I do not know if this explanation is any good, so please be kind :) Doing my best!

[1] this article might explain things better than I can... https://blog.noredink.com/post/142689001488/the-most-object-...
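Point 3 can be sketched in Ruby with method_missing (the `PostOffice` class and its message names are made up for the example): the "letter" has no real method behind it, but the object inspects the intended destination and crafts a reply:

```ruby
# There is no deliver_to_santa method anywhere; method_missing looks
# at the message name, extracts the recipient, and answers anyway.
class PostOffice
  def method_missing(name, *args)
    if name.to_s.start_with?("deliver_to_")
      recipient = name.to_s.sub("deliver_to_", "")
      "routed letter #{args.first.inspect} to #{recipient}"
    else
      super   # genuinely unknown messages still fail normally
    end
  end

  def respond_to_missing?(name, _include_private = false)
    name.to_s.start_with?("deliver_to_") || super
  end
end

po = PostOffice.new
po.deliver_to_santa("I want a Switch")
# => "routed letter \"I want a Switch\" to santa"
```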

I think the idea of message passing is a lot like "pure" functional programming. What matters isn't simply that the language does message passing but that the programmer can count on all communication being by message passing.

Then you have a broad "polymorphism" that guarantees all the parts of the system just take messages and do things. Obviously, in practice that could be implemented by function call but having the confidence that programming units are only going to receive messages makes it easier to think about the system - hopefully.

I think this quote from a blog post [0] paints a pretty clear picture from the Objective-C perspective:

  I was thinking recently on the idea of “sending a message” versus that of “calling 
  a method”. Those familiar with the capabilities of the Objective-C runtime understand 
  that there is a difference.

  Most compiled languages refer to methods and functions internally as offsets. The 
  offset indicates that if one were to move a certain number of spaces in memory 
  from a designated starting point, then we would find the beginning of the 
  instructions that correspond to the desired function.

  This has the advantage of being very fast. The starting point is known at compile-
  time, and so to find the appropriate section of memory takes just a couple of 
  instructions. However, it has the disadvantage of being inflexible. This 
  is where most compiled languages differ from Objective-C

  The Objective-C runtime maintains a list of all the methods and functions it knows
  about. The list has two components per entry: the name of the method (known as 
  the method’s “selector”) and the location of the method in memory.

  When an object attempts to “call a method”, it is really behaving quite differently. 
  When the code was compiled, the compiler translated the code [anObject 
  doMethod:aParameter]; into objc_msgSend(anObject, @selector(doMethod:), 
  aParameter); (basically. The actual behavior is slightly more complex but is all 
  explained in the documentation).

  The objc_msgSend() function does a dynamic lookup. It knows the name of the method 
  it’s supposed to find (@selector(doMethod:)), and so it goes and looks up in its big
  list where it can find what it’s supposed to do next.

  This allows us to do some really interesting things. For example, we can modify this 
  list whenever we want. We can swap out values of the list, so that instead of 
  selector A executing the code for A, selector A might instead point to the code
  for B. The runtime allows us to do some very powerful stuff.

  The disadvantage is that this is (as you might imagine) slightly slower than directly
  jumping to the appropriate section in code. However, the difference is 
  minute. Each message send takes just a couple nanoseconds longer than a regular 
  method call.

[0]: http://davedelong.tumblr.com/post/58428190187/an-observation...

I'm not Alan Kay, nor do I pretend to think like him, but having done some experiments on what I think he was suggesting I think the distinction between message passing with a method call is a bit of a red herring. Message passing can obviously be implemented with function calls. When he's talking about "messages" he's not referring to implementation. He's referring to constraining where data can be accessed when doing a computation.

Because a lot of people (including me) have a Simula based view of "object oriented", we tend to think of objects as data structures with functions attached to them. Alan Kay had a different view, as far as I can tell. He viewed objects as being a collection of abilities. You could invoke these abilities by sending the object a "message". How you send that message is irrelevant. The important thing is that the object is not a collection of data, but rather the object contains the program state necessary to provide the ability (and nothing more). One of the things he talks about (I can't remember if he does in this specific email exchange, though) is the idea that once the data is inside the object, you can't actually access it any more. It becomes a detail that the programmer doesn't have to worry about.

As an example, it's tempting to look at a point on a 2D plane as a tuple containing an X coordinate and a Y coordinate. However, let's forget about the data and instead think about the actions that you might want to do with a point. You might want to construct a vector from it (normalised from the point 0,0). You might want to translate it (by giving it a vector). You might want to rotate it around another point at a certain angle, etc, etc. From the outside perspective there is no reason to access the X and Y coordinates. We don't have to care about what a point object contains -- we only have to care about what messages it responds to.

But how is that really different from having a struct with X and Y and a bunch of functions attached to it? I thought pretty hard about this, and one of the things I thought about was what if we approached this in a more functional, rather than imperative, fashion. Alan Kay was, after all, coming from an FP background.

For example, let's say that we have a collections of points and we want to "indent" them by pushing them all to the right based on the X coordinate of an existing point. Our "normal" approach would be to get the X coordinate of the existing point and then add that value to the X coordinate of all the other points.

Instead, though, what if we had a method on Point that accepted a function. The method would call the function and furnish the x coordinate (we could call it "with_x", perhaps). Now we can use that method to construct a Vector, setting the x coordinate of the vector to the x coordinate of the point. We could then map over our collection of points and translate using the vector.

The difference is subtle, but I think it's important. With a struct, we essentially export program state out of the Point object. With this more functional approach, we run the function within the Point itself. In other words, we are asking the Point to construct the Vector itself, by sending it a message.

I think this is what Alan Kay meant when he talks about message passing. It's not about the mechanism of the passing of messages, it's about where the computation is performed as a result of that message passing. In our example, the X coordinate of the point never "leaks out" of the point. We can use it in the context of the point, but we can just grab it in the middle of some other computation.

Is that distinction important? I've tried to write some non-trivial code in that style and I really liked how it came out. I'm not sure if it's "better" than doing it another way, though. I would have to spend considerably more time with that style of programming to say for sure.
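A minimal Ruby sketch of the with_x idea described above (simplified: a plain translate instead of a Vector, and a to_a that exists only so the sketch can show results -- a "pure" version would omit it):

```ruby
# The Point never hands out its coordinates; callers pass it a block
# and the computation runs "inside" the point.
class Point
  def initialize(x, y)
    @x = x
    @y = y
  end

  # Run the caller's code with the x coordinate instead of
  # returning the coordinate itself.
  def with_x
    yield @x
  end

  def translate(dx, dy)
    Point.new(@x + dx, @y + dy)
  end

  def to_a   # only for displaying results in this sketch
    [@x, @y]
  end
end

anchor = Point.new(3, 0)
points = [Point.new(1, 1), Point.new(2, 5)]

# "Indent" every point by the anchor's x, without ever pulling the
# coordinate out into a local variable of the surrounding code.
indented = anchor.with_x { |ax| points.map { |p| p.translate(ax, 0) } }
indented.map(&:to_a)   # => [[4, 1], [5, 5]]
```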

Isn't a crucial difference that a method call may have a return value, while message passing never does?

Now method calls and function calls: to me they're the same thing. After over 20 years of Java, I still stumble over that terminology. They're function calls, OK?

I beg to differ. Functions are stateless things: they should always return the same result for the same inputs. Methods, on the other hand, have access to the state of the object whose method you call, and the result of the method call can depend not only on its arguments but also on the state of the recipient object.

Of course depending on the language a keyword like "function" need not create something that always returns the same result for the same arguments. JavaScript for instance.

A method is "associated with" an object and has access to the private state of that object. Of course depending on the language that might differ but we are here talking about what is an "OO language".

The difference between a function that depends only on its arguments, and a method that depends only on its arguments and the object, is trivial, which is why the distinction bothers me.

Another often overlooked difference between the two is that many discussions compare a function with immutable arguments, with a method operating on a mutable object. And then the method is said to be not a function because of the object's mutability. But there is no need for the object to be mutable or for any arguments to be immutable.

Perhaps the most substantial difference is a method's dispatch on the object type. But if we're going with Kay's definition, then that is not an essential part of OO, and again, you can imagine dispatch in function invocation too.

To summarize: When I start taking apart the different aspects of function invocation and method invocation, I have a very hard time seeing the difference.

"Under the hood" most language implementations just implements methods as functions with the object as one of the parameters and privileged information about how to interpret the object structure.

This illustrates the difficulty of differentiating these things, as pretty much everything we do is "just" syntactic sugar. But of course that syntactic sugar often has substantial impact on how we think.

People call Objective C and Ruby message passing, but they have return values.

Nobody's definitions here seem consistent or applicable to implemented systems! Crazy how much a well-known idea can be so hard to pin down.

Smalltalk messages also return values.

And here I was thinking I was the only one who was utterly confused by Apple's docs about Objective-C calling everything slightly differently than I was used to.

Also, isn’t it odd that your replies all say something different about what message passing actually is?

This talks about primarily asynchronous message passing, which doesn't seem to be what anyone means in practice.

Actually that is how NeXT implemented distributed objects.


That document says that their messages were synchronous by default, and that asynchronous was something extra built on top just for distributed objects, not how they worked normally and not part of a uniform message passing interface.

Yes, it was up to you to change the behavior on doesNotUnderstand: methods, but the feature was available, just a matter of defaults.

Technically there is no difference. In a closed universe both compile down to the same.

The difference lies in the approach to software design[1].

[1] If I could be more specific, Mr. Kay would have been so 30+ years ago.

I like to see the difference between message passing and method calls as conceptually being about who is responsible.

A method call is "done to" an object. The caller decides which code is invoked. It does so with information about the object or its class, but ultimately the object has no direct say.

Message passing on the other hand conceptually grants the object the role of deciding what happens to a message.

Message passing is often implemented in terms of method calls for the simple cases, but an object has the ability to dynamically define how a message should be handled based on state that may be known to the object but not the caller.

A typical Ruby example (and Ruby is a good example - it "stole" most of its object model from Smalltalk) is Ruby ORMs like Sequel (same applies to most Ruby ORMs) that dynamically define accessors for columns etc when connecting to a database and dynamically querying the database for the schema.

In Ruby, method calls are a veneer - you can write what looks just like a C++ method call, decided at the callsite, but at runtime the receiving object can be redefined at any point, up to and including changing which methods the object responds to on an object-by-object basis, or defining "method_missing" so that what look like method calls are entirely dynamic.
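For instance, the set of messages an individual Ruby object responds to can change at runtime, per object, without touching its class (a toy sketch, not from any library):

```ruby
# Two instances of the same class; only one of them gains a new
# ability mid-run, via its singleton class.
class Widget; end

a = Widget.new
b = Widget.new

a.define_singleton_method(:describe) { "special widget" }

a.respond_to?(:describe)   # => true
b.respond_to?(:describe)   # => false
a.describe                 # => "special widget"
```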

> It it supposed to be remotable? So are method calls.

I'd say there is a difference in degree of complexity in remoting. In a message passing system you typically do not need to have any kind of definition of the interface available client side. E.g. Drb for Ruby lets you expose any Ruby code over a network connection this way.

E.g. here is a real example, using the "pry" Ruby REPL. Server side:

    $ pry
    [1] pry(main)> require 'drb/drb'
    => true
    [2] pry(main)> drb = DRb.start_service("druby://localhost:8787", self); nil
    => nil
    [3] pry(main)> def hello; puts "Hello world"; end
    => :hello
    [4] pry(main)> Hello world
    [4] pry(main)> 
    [5] pry(main)> def hello; "Hello world"; end
    => :hello
The above exposes "self", which in this case is the "main" object of that Ruby instance, but you can pass it any object you want, and all public instance methods on that object will then be remoted, and any objects returned from those methods will be remoted dynamically as needed.

It then defines the "hello" method on main after the server has been set up. Doesn't matter when the methods are defined, as long as they're there when you call them.

The redefinition of "hello" in [5] happens after I've called hello the first time from the client (the "Hello world" in [4] was triggered by the client). The redefined version returns the string to the client instead of printing it on the server:

    $ pry
    [1] pry(main)> require 'drb/drb'
    => true
    [2] pry(main)> client = DRbObject.new_with_uri("druby://localhost:8787")
    => #<DRb::DRbObject:0x000055828909f220 @ref=nil, @uri="druby://localhost:8787">
    [3] pry(main)> client.hello
    => nil
    # Here we redefine "hello" on the server.
    [4] pry(main)> client.hello
    => "Hello world"

"I'm not against types, but I don't know of any type systems that aren't a complete pain, so I still like dynamic typing.)"

That's probably more controversial here than his views on OO.

Arguably, it was true back in 2003 when this was written. There has been a lot of improvement in type system usability since then.

A lot of us still feel that way. The trade-off is still there, and in many situations I still don’t find types to be worth the trouble.

> The trade off is still there

Not really, in my opinion. Type inference especially has come a long way. If I offer you a choice between language A and language B and claim that their syntax is nearly identical and you'll use essentially the same number of keystrokes to accomplish the same task in both, but language B can inform you about mistakes you've made at compile-time instead of waiting until run-time... which one are you going to choose?

While dynamic languages afford some degree of freedom, that freedom comes at a cost. The tradeoff used to be that safety required lots of extra writing (e.g., Java). Now that this isn't the case... the freedom of dynamic typing can be more harmful than helpful.

(I still use Python all the time, so don't mistake me for a static-only zealot of some sort. But I think the landscape of type systems has changed a lot since Kay made these claims.)

There's definitely still a cost to static types.

Types can be thought of as a way to defensively program - encode what you can to avoid runtime errors. At minimum this means you have to think about your constraints - we may both agree that that's a fine tradeoff, but it's a tradeoff nonetheless.

Erlang, an actor-based system, does not do this. Instead, it assumes that even if you added types you'd run into failures, and that any reliable system should spend its effort not on trying to avoid that but on dealing with it.

Erlang allows you to instead encode failure modes in an extremely resilient way (supervisory trees) and to trivially move code across networks to avoid hardware failures.

Languages like Python, in my opinion, are the worst of both worlds. Python encourages lots of shared mutable state but, until recently, offered very little static analysis. Time was instead spent on testing code - something that I do not believe it does any better than Erlang or a statically typed language.

To me, the answer is sort of... why not both? We can use actors and supervisory trees and static types. As an example, I write Rust services that execute on AWS lambda in response to queue events. I get state isolation and all of the good bits of the actor model, and static types.

Pony is a more fine grained solution, offering static types and in-process actors.

Types aren't just for catching type errors, they're also a way of defining and enforcing at least parts of the programming contract. Most type systems are poor at this, but it still goes beyond just catching runtime type errors.

The time you save from writing a dynamic program is time not spent on defining that contract. You will eventually be paying for that when the first major refactor comes - if it comes, that is. You will pay for it with extra test coverage, if that's in the budget. Or you will pay for it by re-writing everything.

Those may all be worthwhile tradeoffs, but in the long run, I think static types win out.

> Types aren't just for catching type errors, they're also a way of defining and enforcing at least parts of the programming contract.

I don't disagree. To be clear, I'm a type safety zealot.

> You will eventually be paying for that when the first major refactor comes - if it comes, that is.

This is fine but not relevant. Erlang, for example, just assumes you'll fuck up the refactor. Actors are isolated interfaces and you can not share state - so a failure in one actor can not impact other actors directly.

It's fine to say that static types are better but if you read about Erlang you may find the approach very compelling - Erlang's managed to provide the basis for extremely reliable systems, without types.

And as I said, it is not either/or. You can build powerful supervisory structures and statically type your code if you like, but no languages really do it, so you have to reach outside of the language (like using a distributed queue/microservice approach).

> It's fine to say that static types are better but if you read about Erlang you may find the approach very compelling - Erlang's managed to provide the basis for extremely reliable systems, without types.

I have yet to see some convincing proof of that, besides that Ericsson router from 20 years ago that ended up being rewritten in C++.

Also, even if it is true like you say that

> Erlang's managed to provide the basis for extremely reliable systems, without types.

This still doesn't prove that there are no languages that can do a better job at it than Erlang.

99.99% of extremely reliable software today runs on non Erlang: C, C++, Java, you name it.

Finally, in my experience, writing supervisors in Erlang is just as painful as, if not harder than, writing resilient code based on exceptions in Java or C++.

I'm not advocating for Erlang over type safety. I personally prefer a type based approach. They are also not incompatible.

That said,

> I have yet to see some convincing proof of that

There is no proof, but I can point you to research.

pdf warnings: https://jimgray.azurewebsites.net/papers/tandemtr85.7_whydoc...

This paper is fundamental to reliability, and describes two primitives for building reliable systems - transactions, and the "persistent process" (spoilers: it's an actor).

And here's Joe Armstrong's thesis. The first 3 chapters are quite relevant and will point you to further research


> besides that Ericsson router from 20 years ago that ended up being rewritten in C++.

This is missing some important information. Erlang is still used at the control plane/ orchestration layer, for exactly the reasons I've described.

> This still doesn't prove that there are no languages that can do a better job at it than Erlang.

Didn't say otherwise.

> 99.99% of extremely reliable software today runs on non Erlang: C, C++, Java, you name it.

Sure, but who cares about Erlang? The real money's in isolated persistent processes aka actors, and I bet most reliable software is built on those, whether language provided or not. See AWS's cell based architecture, which is just the actor model with discipline attached. Or all of microservice architecture.

> Finally, in my experience, writing supervisors in Erlang is just as painful, and if not harder, than writing resilient code based on exceptions in Java or C++.

That's ok.

"I have yet to see some convincing proof of that, besides that Ericsson router from 20 years ago that ended up being rewritten in C++." Control plane for 80% of mobile in the world is Erlang. Core critical infra @ Goldman is Erlang. Control plane for 90% of internet routers is Erlang.

That's neither proof, nor convincing. Just claims.

Do you have any statements from these companies that they do indeed do that?

I'm not talking about claims from Erlang web sites, but actual companies using Erlang confirming your claims.

> Types aren't just for catching type errors,

Yes they are.

> they're also a way of defining and enforcing at least parts of the programming contract.

Violations of those parts of the contract are type errors. (And all proper type errors—those that aren't artifacts of the type system and its incorrect use or impedance mismatch with the project—are violations of the programming contract.)

> The time you save from writing a dynamic program is time not spent on defining that contract.

No, you can still define the contract when using a dynamic language, and still often save time compared to a real static language.

In the ideal case, sure, a static language would add no additional overhead to this, but that's an unattainable ideal.

I think you're maybe overly focused on the "compile time type checker" bit of static typing.

The contract definition bit is secondary, but useful. It's also at least somewhat separable from the type checking.

The best example I can think of is to compare the duck typing that is common in many dynamic languages with the formal interfaces that are more popular in static languages. With duck typing, you basically look for a method with a particular name in an object, and then, having found it, simply assume that that method is going to implement the semantics you expect. That works surprisingly often, but it is a bit rough-and-ready for many people's tastes.

With formal interfaces, you have a clearer indication from the type's author that they're consciously planning on implementing a specific set of semantics. They could still be doing it wrong, of course, but it's at least trying to be more meticulous about things.
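Python can illustrate both styles side by side; here `typing.Protocol` plays the role of the declared interface (all names hypothetical — `Protocol` is structural, but the author's explicit statement of the contract is the point):

```python
from typing import Protocol

# Duck typing: accept anything with a .quack(); the semantics are assumed.
def duck_call(bird):
    return bird.quack()

# Declared interface: the intended contract is written down up front,
# and a static checker can verify callers and implementers against it.
class Quacker(Protocol):
    def quack(self) -> str: ...

def typed_call(bird: Quacker) -> str:
    return bird.quack()

class Duck:
    def quack(self) -> str:
        return "quack"

assert duck_call(Duck()) == "quack"
assert typed_call(Duck()) == "quack"
```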

I also think it's worth pointing out that static typing can be useful as a performance thing. Pushing all those type checks into run-time does have costs. There's the extra branch instructions involved in doing all those type checks at run time, and there's the extra memory consumption involved in carrying around that type information at run time.

(It's also true that this aspect is very much a continuum, especially with regards to the performance considerations: Many dynamic languages have JIT compilers that can statically analyze the code and treat it as if it were statically typed at run time, and any ostensibly static language that allows downcasting or reflection supplies those features by carrying type information and allowing for type checks at run time.)

So the only message passing between your rust actors is via SQS or such?

Cos in this paper they suggest actor coordination is a problem for AWS and that using queues for message passing is slow.


Very interested in your experience cos I’m interested in trying this pattern.

> So the only message passing between your rust actors is via SQS or such?

It's S3 -> SNS -> SQS -> Lambda

This gives me:

* Persistent events on s3, for replayability

* Multiconsumer for SNS

* Dead letters, retries, etc via SQS

Maybe from a latency perspective this is slow, but my system can tolerate latency at the level of minutes, so I'm really doubtful that my messaging system will matter.

Most time is spent downloading payloads in S3 as far as I know. I batch up and compress proto events to optimize this.

I haven't tried scaling out my system but I'm confident that message passing is the least of my concerns.
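Very roughly, and with the caveat that the commenter's services are in Rust, the unwrapping a handler does at the end of that chain might look like this in Python (field names follow AWS's documented S3/SNS/SQS event shapes; everything else is hypothetical):

```python
import json

def handler(event, context=None):
    """Toy sketch: each SQS record's body is an SNS envelope whose
    Message field is an S3 event (the S3 -> SNS -> SQS -> Lambda chain)."""
    keys = []
    for record in event["Records"]:
        sns_envelope = json.loads(record["body"])       # SQS delivers the SNS JSON
        s3_event = json.loads(sns_envelope["Message"])  # SNS wraps the S3 event
        for s3_record in s3_event["Records"]:
            keys.append(s3_record["s3"]["object"]["key"])
    # A real consumer would now download and process each key from S3.
    return keys
```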

No this completely misses the point! The point is the development model, not the language syntax.

Static types assume a "static" phase: an edit-compile-run cycle, with bright lines between each stage.

But as a Smalltalk programmer, you inhabit the system and build it inside-out, while it is running. For example, during development it is routine to query live objects. The lines between the edit-compile-run phases disappear.

Static types don't make sense in the Smalltalk model because there's no static phase.

I agree with that. The interesting aspect of dynamic languages is that it's a living program, you can interact with, it can interact with itself and you could even effectively make it an entire new program without ever closing it like the Ship of Theseus. Smalltalk, Lisp Machines and the Erlang Virtual Machine were pushing this idea.

Which is why I feel that a dynamic language that doesn't focus on the interactivity, like having a great REPL based development cycle, hot swapping capabilities and strong code as data utilities kinda misses the point in the trade-off between static and dynamic, sacrificing safety for too little. Jupyter notebooks seem to have recently brought a little of the first to the mainstream at least.

Static types assume no such cycle! Milner's MLs, which introduced the hugely influential simple type theory to functional programming in the late 70s, and from which OCaml and Haskell are derived, supported global type inference and were very "lispy". They were first used for interactive theorem proving, where you spent all your time spinning around ML's REPL, updating your global environment where your global bindings have a nice static type, and when you had finished for the day, you saved your image (you save-lisp-and-die'd).

This is still a way to use Ocaml today, and Poly/ML, which is the standard ML used for the Isabelle theorem prover, makes heavy use of an image based model.

But there's more to type systems than static types - gradual / dynamic typing is a thing. Eg:


> If I offer you a choice between language A and language B...

Realistically you don't have the choice between hypothetical buffet languages A and B, you have a choice between Java or C# and Python or Javascript and maybe a handful others. Those are the type systems you will have to deal with, most likely.

I'm sure the type inference in Haskell (or similar) is fantastic but virtually nobody uses those languages so those benefits are purely theoretical for most programmers.

By the way, static type inference in Javascript also has gotten pretty good and in many cases it can catch all of the bugs that an ordinary type checker would catch.

> Realistically you don't have the choice between hypothetical buffet languages A and B, you have a choice between Java or C# and Python or Javascript and maybe a handful others. Those are the type systems you will have to deal with, most likely.

> I'm sure the type inference in Haskell (or similar) is fantastic but virtually nobody uses those languages so those benefits are purely theoretical for most programmers.

I think this argument would hold more water if the dynamic language under discussion wasn't Smalltalk...

> Not really, in my opinion. Type inference especially has come a long way.

Does dynamic typing also make writing tests easier?

For example when writing tests in Python it's relatively trivial to mock out specific methods or functions. However, when I was writing tests in Go, I realized to test things which required mocks I would have to change all the type signatures to use interfaces instead of structs, and that required writing a new interface as well.

I admit I am not entirely sure whether this difficulty stems from static vs dynamic typing or something else.
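For comparison, the Python side of that is a one-liner with `unittest.mock` — no interface declaration or signature change required (toy names):

```python
from unittest import mock

class Client:
    def fetch(self):
        raise RuntimeError("would hit the network")

def greeting(client):
    return client.fetch().upper()

# Swap the method at run time; nothing in the type signatures changes.
with mock.patch.object(Client, "fetch", return_value="hello"):
    assert greeting(Client()) == "HELLO"
```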

Honestly this is why I think dynamic languages with optional type systems tacked on are the best.

You get a huge amount of power in your type system (Python is actively working on dependent types and supports variance first class), but for code where that's a burden (test cases), you can drop it.

Maybe powerful macros that let you copy an interface (Python's mock.patch with autospec=true, but better) would solve the same problems, but you don't generally have that option.

Go is not very impressive when it comes to type system power though. My bet is that if you wrote your statically typed code in Crystal instead, then you wouldn't have any problems creating your mocks. There is nothing in the concept of static typing that forbids duck typing - it is just that most type systems are weak enough that it doesn't work.

Does dynamic typing also make writing tests easier?

Maybe it's dynamic typing that makes writing (so many) tests necessary to begin with.

Nope, at least not in general.

Tests tend to test values. Types generally don't catch wrong values, so if you implement add() as subtract(), the signatures are the same, but you still get a wrong result.

However, tests of values incidentally also test the types, because values have types and for the values to match, the types must also match.

So if you write the tests that you need to write anyhow, the ones for the values, you have also tested the types, without extra effort.
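A minimal sketch of that point in Python: the value test catches the add-as-subtract bug that the types wave through:

```python
def add(a, b):
    return a - b  # bug: implemented as subtract, with an identical signature

# A type checker sees (int, int) -> int and is satisfied; the value test is not.
caught = False
try:
    assert add(2, 3) == 5
except AssertionError:
    caught = True
assert caught
# Had add() instead returned the *string* "5", the same == 5 check would
# also fail: checking the value checks the type for free.
```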

Tests tend to test values.

Tests check the values, but that doesn't mean it's the only thing they test. When you say 'incidentally' you are conflating the goal with the means.

Anyway, my point is that dynamic typing creates a whole category of errors, makes it much easier to make several other kinds of mistakes and, what is worse, delays the moment at which they become obvious.

> but that doesn't mean it's the only thing they test.

That was exactly my point. In checking values, they also check types, without any extra effort.

> creates a whole category of errors

It doesn't create them. It just doesn't prevent them (type errors) at the language/compiler level. But since you have to test the values anyway, that doesn't really matter.

> delay the moment

Not really. Dynamic languages make possible environments that compile+run before the static compiler is done compiling.

That line of reasoning is the same as certain persons that "sell protection": they want you to pay to solve a problem that was much smaller before they appeared. Of course they try to convince you that the problem was already there.

Not really. Dynamic languages make possible environments that compile+run before the static compiler is done compiling.

You're confusing dynamic types with dynamic compilers. Or maybe you mean speed? There are very fast compilers; I assume your reference is C++, which is a dog. Anyway, that's a red herring, because the delay I was talking about is logical, not about the implementation.

> before they appeared

Hmm...static type checking is something you add to a programming language, so you seem to have your causalities mixed up a bit.

> confusing dynamic types with dynamic compilers

Nope. Most of the compilers for languages for different kinds of rich static type systems are slow, and getting slower. Yes, C++, but also Haskell, Scala and Swift. Some have Turing-complete type-systems, so you actually have no guarantee that compilation will even terminate, never mind the time it will take.

An almost bigger point is incrementality. A language/system like Smalltalk lets you compile a method at a time without stopping the running program, and you can do an exploratory step that is allowed to be inconsistent without having to fix all related code.

The development experience is something that has to be seen and lived to be believed.

> Most of the compilers for languages for different kinds of rich static type systems are slow, and getting slower

It's not the types that make Haskell slow to compile, as you can verify but running a type-check only pass using -fno-code. You will find it run an order of magnitude quicker than a full compile.

> Type inference especially has come a long way

Maybe you know this, but type inference goes back to the 1970s. It's taken this long to start seeing it in mainstream languages.

Similarly, pure functional strongly statically typed languages with type inference, pattern matching, algebraic data types and non-null types have been around for literally decades yet we're still stuck debating about the so-called benefits of dynamically typed languages.

I'm going to choose the dynamic one.

It's risk and reward. You're giving away seconds or so on every line, in exchange for avoiding what might take only a little debugging if an error is ever encountered.

I'd rather get the code written and the problem solved than make converting a string into a date the problem to be solved. There are plenty of problems figuring out if something is a date or not, but it's worth it.

And then you have charts like this https://rollbar.com/blog/top-10-javascript-errors/

9 out of the 10 most common JavaScript errors are around types, hence why such a large push to type JavaScript (TS, Flow, Elm, ReasonML)

If things have fundamentally changed, then what is a type-inference-based language that's fast but nearly as easy as Python?

Depends on what exactly you mean by “type inference”, “fast”, and “easy as Python”.

Smalltalk has been pretty fast for ages (without inference, and may or may not be as easy as Python to you).

JavaScript JITs can use a form of type inference, so you get the benefits of speed with no change to syntax. Again, it’s pretty fast, and “easy” is in the eye of the beholder.


don’t find types to be worth the trouble

This is the part I always fail to grok in discussions about type systems. For me it is just the opposite: I typically don't find dynamic typing to be worth the trouble, so I fail to understand this sentiment when going the other direction.

Maybe it's because my degree is in mathematics (lots of proofs written out long-hand), and my first language was Java? I have since all but abandoned many of the ideas that Java so rigidly enforces, OOP among them, but static typing just has so many advantages for me that I have no desire to get rid of it.

One thing I've noticed in some recent work is that the static language I'm using (Go) allows me to both be lazier and get more done than comparable work being done in dynamic languages. I'm not reading or writing tons of documentation (I'm reading other people's code directly, or just looking at the types involved and moving on). I'm not constantly handling byzantine runtime errors or having to memorize the arbitrary intricacies of any over-bearing frameworks in order to be productive. I don't need to be on guard quite so much, I lean on static analysis. In general I just don't need to hold so much extraneous crap in my head, it's all spelled out right there in the code and I'm free to think about larger concerns, like the best way to solve the problem at hand. And when I circle back around to some 10-20k line library I wrote 3-6 months ago, I can read, understand and refactor parts of it quickly.

People often talk about developer comfort and speed being improved by dynamic languages, and maybe that's true if Rails solves all your business needs, but for many of the tasks I've worked on, that just hasn't borne out over the past 10 years. I have no doubt that dynamic languages seemed a breath of fresh air after fighting with C/C++/Java, but to my eyes Ruby/Python/Javascript etc were always a bridge too far for a lot of tasks.

It's not worth the trouble with "write once" programs that you never refactor.

I think the problem is that mainstream statically typed programming languages come with terrible or non-existing metaprogramming facilities to make up for type constraints.

Rust supposedly has good metaprogramming but then it also has a type system that is so obnoxious it's not "worth the trouble in many situations".

> then it also has a type system that is so obnoxious

Care to elaborate?

There has also been improvement in dynamic typing. For instance EcmaScript-6 default arguments mean that the programmer can effectively indicate the types of arguments a function or method is to be called with.

From the programmer's point of view default arguments are a cheap way to achieve some benefits of "type declarations". And they are especially cheap when you consider that in return you get - default arguments. They are a win-win.
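The same trick transposed to Python (purely illustrative; the comment is about ES6):

```python
# A default value doubles as a cheap hint of the expected argument types,
# and you get default arguments into the bargain.
def repeat(text="", times=1):
    return text * times

assert repeat("ab", 3) == "ababab"
assert repeat() == ""  # the defaults make the intended types self-evident
```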

Consider also the discussion about Smalltalk preserving the system integrity by allowing only methods manipulate the internal state.

This means that anything that gets modified inside an object is done by methods of that object. That means that it is easy for you to add dynamic type-checks whenever you are writing an instance-variable. That goes a long away towards alleviating type-problems.

Whereas in a language where objects' properties can be modified from the outside there is no way to intercept such modifications and type-check them. EXCEPT in recent developments such as ES6 you can define SETTERS which get automatically called when you or anybody tries to set the value of a property. That means you can then put the type-checks there. This is just one example of improvements in dynamic type-checking.

Now ES6 is great, but it does not force you to create such setters. Whereas Smalltalk does force you to create methods if you want to modify property-values.

This great feature is somewhat diminished in value by the fact that (in Smalltalk) many methods can write the same instance-variable so it sometimes becomes tedious to figure out which method including inherited ones wrote a delinquent value to the instance variable. But then Smalltalk IDEs allow you to browse all methods which read or write a given instance variable, which helps in such a situation. Then you can think about refactoring it so that really there is only one setter for a given instance variable.
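The setter idea above looks much the same in Python with a property (hypothetical example):

```python
class Account:
    def __init__(self):
        self._balance = 0

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # Every write funnels through here, so the check cannot be bypassed.
        if not isinstance(value, int):
            raise TypeError(f"balance must be an int, got {type(value).__name__}")
        self._balance = value

acct = Account()
acct.balance = 100
assert acct.balance == 100  # acct.balance = "lots" would raise TypeError
```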

My feeling is that it is not about improvement but democratisation: OCaml has been around since 1996 and its type inference is stellar. I was shocked when I first touched C and Java after two years of OCaml.

Pascal, C++ and Java were very, very bad typing acrobatics.

Smalltalk gets by by having a sensible topology of classes that can encode a lot of natural information about processes (errors, restarts, etc.).

I mentally categorize these discussions of "true OOP" in the same class as "RESTful" API discussions. Nobody is quite sure what they're even arguing about, so the discussions go around endlessly in circles.

Oh, it's much easier to define RESTful than OOP.

RESTful means only that the thing you send has an envelope with enough metadata about what you're doing that middleware can reason enough about it to do things like caching and routing, and also, critically, the methods ("verbs" if you wish) in the envelope correspond only to things a browser can trivially figure out to do based on HTML. But really, in the end it means HTTP, and HTTP is basically like a filesystem protocol with fewer operations (e.g., no RENAME) on the one hand and more (content encoding conversions) on the other.

The moment you need to compose operations into larger, atomic even, transactions, RESTful becomes a pain and the workaround is to treat it like an RPC protocol. And also, on the browser side, you leave HTML-only land and now need JavaScript to compose those transactions.

However, as you'll see in the responses to this, you're actually completely right: there is no true scotsman.

Alan Kay is a Turing Award winner and one of the main pioneers of OOP. I don't see how he wouldn't know what he's talking about.

Alan Kay probably does know what he's talking about. The problem is that nobody else knows what he's talking about.

I stated something similar in an older post. Essentially we all know who he is and have a lot of respect for his amazing accomplishments. He even has some lectures that are awesome. One that I've watched a few times goes over the massive technical debt we're building as a society: the Xerox PARC team built a computer with an OS, language, editor, GUI, etc. in probably 1/50th (I'm completely making this number up to paraphrase the point) of the code of just Microsoft Word. So we're using overcomplicated systems and tools, and it isn't necessary.

On the other hand, there are other papers, internet discussions, and talks of his where I think the target audience is someone who has an advanced understanding of computers, biology, art, philosophy, pedagogy, and psychology and has had the time to piece it together, and I'm completely lost as to the point he's trying to make, which is a shame. I wish those talks came with an ELI5 where I could get the gist before diving in further. Perhaps I haven't earned the knowledge to participate in those discussions yet.

Maybe the cause of OOP success isn't what Alan Kay intended it to be, or Meyer or any other language creator or theoretician.

IMHO the great thing of OOP was usability.

These intentionally vague terms do have a purpose though - their inventors and early-to-mid adopters hype them up, write a book or two, sell few success stories and that's it - they can now sell their services to the startups and large enterprises alike; then others start cargo-culting and vicious cycle goes on.

This has happened with pretty much every vague thing in the tech: "OOP", "Agile" (SAFe and friends), "RESTful", "serverless", etc...

I can see REST way better than Struts era style of backend services when writing complex web apps.

Smalltalk had to cut corners with messaging due to the limited processing of the time, nevertheless it has fully reified messages; one can express the sending of messages between objects.

Smalltalk inspired Erlang, which is the fully-asynchronous messaging/independent threads of execution part of OO only.

Self did OO without inheritance (composition by prototypes).

Whenever you are unsure if something is truly OOP just answer one simple question:

> Is everything an object?

If the answer is no, you don't have true OOP in front of you. Otherwise, you might want to dig a little deeper.

what's the name for concepts like these?

There is something of a "No true Scotsman" that comes up with both of those.

I wanted to get rid of data.

I wonder if the OOP "style" of obfuscating program behaviour by using more code (class hierarchies, overridden functions, etc.) instead of data (table lookups) could be attributed to him.

The opposing viewpoint is discussed at https://news.ycombinator.com/item?id=4560334 and I've personally found from experience that expressing a set of conditions and decisions as one table is far easier to understand and modify than trying to model it with multiple interacting objects. The latter has much more cognitive overhead.
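A small Python illustration of the table style (made-up domain): the whole decision surface sits in one literal, instead of a subclass or if/else branch per case:

```python
# All conditions and decisions in one lookup table, keyed on the inputs.
SHIPPING_COST = {
    ("domestic", False): 5,
    ("domestic", True): 15,
    ("international", False): 20,
    ("international", True): 50,
}

def shipping_cost(region, express):
    return SHIPPING_COST[(region, express)]

assert shipping_cost("domestic", True) == 15
assert shipping_cost("international", False) == 20
```

Adding a case or auditing the rules means editing one table, with no behavior scattered across overridden methods.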

Quite the contrary: Everything should be data. Even the lines of code. The values of variables and constants.

We can't have AI if we can't rewrite the code. It's super hard to rewrite 1's and 0's, but if our lines of code are the data, then all we need to do is update a row in a database to rewrite the program.

We can automate this. Even now our code is data. We could automate a program to upload a new source code file to github and then redeploy ourselves to the appropriate nodes.

What Kay was getting at there is a bit difficult for me to follow, but I'm pretty sure that he meant something different than that. Something more specific and constrained.

What you're describing sounds to me like a very LISP-y view of data. Alan Kay, by his own admission, wasn't very versed in the LISP tradition - based on what he's describing, at the time he was pretty deeply immersed in the more ALGOL-y side of the world, back in a time long before there was much cross-pollination between the two camps, and that would presumably have influenced his thinking.

So, semi-wild conjecture: It seems to me like what he was really getting at is that he wanted to get rid of the heap. Or at least shove it deeply into "implementation detail" territory, the same way languages like Haskell try to banish any evidence of the existence of memory registers from the programmer's consciousness.

In so many languages in the imperative tradition, the standard practice was to have most your data living out there, in this giant pile of shared memory, more or less available for anyone to see. I think that, even before the birth of C, most academics recognized that as a source of liability; a lot of ink was spilled dancing around possible solutions to that problem in the mid-late 20th century and beyond. What Kay is saying in TFA seems, to me, to place his thinking clearly in the middle of that whole scene.

The way I see it is that the FP school is about the microcosm like quantum mechanics, deciding what are the best possible building blocks of programs, the functions, which are pure and referentially transparent and immutable.

OO is looking at the macro level: What are the systems how they communicate in a way that preserves the referential integrity of the system as it evolves over time. OO is like cosmology, structure of things on a large scale.

FP = Quantum Mechanics

OO = Relativity

OOP is not very good on large scales; in fact, it shines at medium scales. Even programs of a few thousand lines of code can greatly benefit from OOP.

We don't know what works at scales that are so large that they are described with cosmological words.

Is this the opposite of global state management paradigms like Redux?

Using the global state requires a lot of contortion on each user's end (bureaucratic hoops) for the safety of the global state. It technically empowers every individual user with the whole state but the combination of not having to think about messaging and just brute force contorting into the global state creates terrible code IMO.

> I wonder if the OOP "style" of obfuscating program behaviour by using more code (class hierarchies, overridden functions, etc.)

Class hierarchies have benefits and drawbacks, but the very idea is that they result in less code (by reuse). The whole point is to make behaviors more apparent, not less. Things really go off the rails when we only consider programming languages and not programming environments — a decent environment will make behaviors very apparent!

The unit of reuse in programming is the function. Inheritance is just an often cumbersome mechanism to define bundles of functions.

Data structure can certainly be units of reuse. Widgets in a GUI toolkit for example.

Data structures are just functions that happen to operate on some common (usually opaque) data.

OO is a great example of Chinese whispers. The original idea behind it is more in line with Erlang or functional programming in general than what we have today in Java or Python.

That’s why I’m adamant that existing OO languages need to die. They’re abominations of good ideas that make things worse. C++, Java, Python are all founded on confused interpretations of OO. OO has also become so synonymous with these implementations that we should just let the paradigm go entirely and start afresh.

C++ isn't like Java in this sense, it's a multi paradigm language and you can use it effectively without ever doing Java style OOP. The STL is mostly OOP free because Alexander Stepanov recognized that it was mostly a bad idea and a lot of modern C++ doesn't use inheritance, virtual functions, abstract classes, getters and setters or a lot of the other Java style OOP junk except sometimes as an implementation detail.

Apparently now it is common to forget that Java style OOP junk was originally mid-90's C++.

The STL builds on OOP abstractions: class templates, static method dispatch, type polymorphism, abstract classes for allocators, iostreams, and base classes for data structures' common methods.

STL is just the containers / iterators / algorithms part of the standard library, the part that Alexander Stepanov designed and originally implemented. Most of the less pleasant parts of the standard library like iostreams come from elsewhere.

Templates, static method dispatch and type polymorphism are not really part of OOP, certainly not Java style OOP. Abstract classes for allocators are a very recent addition with polymorphic allocators and are a good example of OOP features being used in modern C++ as more of an implementation detail (a way to get dynamic dispatch / type erasure).

It is C++ OOP, regardless of how Java does it.

Java is not the last word in what OOP means in practice, and is a younger language, which took many OOP ideas from C++ OOP libraries that were current when Java was designed.

In fact something like J2EE was a relief versus using CORBA or DCOM/MTS, kings of OOP boilerplate.

Or the surviving king of C++ GUI frameworks, Qt.

As for the STL part you refer to, the original one, containers / iterators / algorithms, it makes use of C++ classes, methods, aggregation and delegation, which are definitely OOP.

OOP doesn't have a rule that it is only OOP when all concepts are used in every single class.

The STL is more derived from Abstract Data Types than from OOP but there is overlap between those. Basically the good parts of OOP is ADTs and are embodied in the STL, the bad parts are most of the rest and are embodied in Java.

It is still OOP from a CS point of view, regardless of how you want to sell it or keep hand-waving away the fact that Java got it from C++.

The Gang of Four book uses Smalltalk and C++ for its pattern examples, Java wasn't even invented when the first edition came out.

To bring it back to the original comment I was replying to and to the original email, Alan Kay says in the email:

> OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP. There are possibly other systems in which this is possible, but I'm not aware of them.

He was writing in 2003 so presumably didn't / doesn't consider Java or C++ OOP languages by his definition.

The comment I was originally replying to said:

> C++, Java, Python, are all founded in confused interpretations of OO. OO has also become so synonymous with these implementations that we should just let the paradigm go entirely and start afresh.

And I was taking issue with C++ being included with Java in this (I think Python shouldn't really be included either). C++ is a multi paradigm language and while I agree the OOP parts of it that it has in common with Java are neither very good nor what Alan Kay meant by OOP, C++ supports other better programming paradigms better than Java does.

The good parts of C++ style OOP are mostly the ADT bits. The less good parts are inheritance and virtual functions. Design Patterns makes heavy use of the less good parts of that style of OOP and most of the problems it addresses are better solved in other ways that modern C++ supports quite well.

I’ve never once worked in a C++ code base that didn’t make use of OO. As long as it’s available, it will be used.

For most of the C++ code I write removing inheritance and virtual functions from the language wouldn't have much impact (although a few abstractions might have to be implemented differently under the hood). The same cannot be said for Java.

C++ isn't object oriented, it's a hodge-podge mix of many things. It did start out as "C with objects", but that implies it's also C which largely operates without the OO notions of objects.

I've never heard it discussed before, but I think the "no data" part is important. Languages without pattern matching (or a few other exotic alternatives) really do not allow for proper elimination. Asynchronous message passing can be thought of as a weird continuation passing variation (you store your continuation in the message receiving code). This means all elimination is done remotely on your behalf, seemingly hiding what is a nastier part of the language.

But once you get proper elimination, your messages can be more complex, and since chained pipelines are very useful you might as well be synchronous if you weren't already; then you have combinators that only shallowly modify your messages, so you might as well immutably share memory to avoid copying; and those objects you create and tear down on the fly might as well not be stateful because they're so short-lived, and woah, now we have functional programming.
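To make the "proper elimination" point concrete, here is a hypothetical Ruby sketch (the `handle` function and message shapes are invented for illustration; requires Ruby >= 3.0 for `case/in`): with pattern matching, the receiver destructures messages itself instead of leaning on method dispatch to do the elimination remotely.

```ruby
# Hypothetical message handler: pattern matching ("elimination") lets the
# receiver take structured messages apart directly, rather than relying on
# the dispatch machinery to pick a method on its behalf.
def handle(message)
  case message
  in [:deposit, amount]  then "deposited #{amount}"
  in [:withdraw, amount] then "withdrew #{amount}"
  in [:balance]          then "balance requested"
  else                        "unknown message"
  end
end

puts handle([:deposit, 100])   # prints "deposited 100"
puts handle([:balance])        # prints "balance requested"
```

Once messages are first-class data like this, the chaining and combinator steps described above follow naturally.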

"OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things."

So this means that languages like Swift, for example, are not OOP?

Almost no OOP languages are actually OOP according to Alan Kay.

I think his OOP is closest to what we'd call actors today.

Smalltalk Inheritance was regarded as a mistake by the Smalltalk intelligentsia in the early 2000's. Message passing and polymorphism was where it was at. Functional programming was discussed by them back then. My own area of interest right now involves Actors with pluggable lambdas.

From his talk at OOPSLA[1], I'd say that only Smalltalk and Common Lisp (through CLOS) were able to do OO as Kay envisioned.

Erlang is another story: it would probably be better to call it just actor model. But again, Kay himself says[2] that there is not a lot of difference between actor model and his OOP.

[1] https://www.youtube.com/watch?v=oKg1hTOQXoY

[2] https://www.quora.com/What-is-the-difference-between-Alan-Ka...

Add Self to your list, which was even purer than Smalltalk since it got rid of the classes and let the objects stand by themselves.

It's not just the actor model, though. In the actor model, all the actors are contributors to the conversation. The subjects of conversation are something else. In object oriented programming, the subjects of conversation are the same category of things as the agents of conversation, and may move back and forth freely.

The simplest way to see the difference is to think about how a new contributor to the conversation is added. In the actor model, some actor forks a new actor in response to a message. If I have built up the state to describe a new actor, there is still a step to bring it to life that changes it from one thing to another.

In object oriented programming, when you build up that state, it may become a contributor to the conversation at any point in time and then go back to being a subject of conversation, or do both at once. There is no switch.
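An illustrative Ruby sketch of that "no switch" point (the `greeter` example is invented): a plain object can flip between being a subject of conversation and a contributor at any moment, with no separate spawn step.

```ruby
require 'ostruct'

# Build up some state as an ordinary value -- a "subject of conversation".
greeter = OpenStruct.new(name: "world")

# At any moment it can become a contributor: attach behaviour and send it
# a message. There is no spawn/"bring to life" step as in the actor model.
def greeter.greet
  "hello, #{name}"
end

puts greeter.greet    # prints "hello, world"
# ...and it is still plain data we can keep inspecting and passing around.
puts greeter.name     # prints "world"
```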

Ruby gets close (which is one of the reasons I use it a lot):

* Method calls are messages sent to objects; a message can be sent to any object directly with Object#send and received as a message when an object[1] responds to #method_missing.

* Local state is stored in objects as @foo variables. Ruby is a multi-paradigm language, so it's not strict about enforcing the privacy of local state, but that's rarely a big deal. Ruby softly encourages using an OOP message-passing style with e.g. Module.attr_accessor defining simple accessor wrappers instead of reading local state directly.

* The extreme lateness and mutability of binding in Ruby is (in)famous for making ruby slow and hard to optimize.

[1] I don't mean "defined in the class"; changes can be specific to a particular object instance.

    obj = Object.new
    obj.define_singleton_method(:method_missing) do |method, *args|
        puts method.to_s.capitalize
    end
    obj.hello!   # prints "Hello!"
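A minimal sketch of the other two points above as well, `Object#send` and `attr_accessor` (the `Counter` class is invented for illustration):

```ruby
# Every method call in Ruby is sugar for sending a message via Object#send.
class Counter
  attr_accessor :count   # generates the #count and #count= accessor messages

  def initialize
    @count = 0           # local state lives in instance variables
  end

  def increment
    @count += 1
  end
end

c = Counter.new
c.send(:increment)       # same as c.increment
c.send(:count=, 10)      # same as c.count = 10
puts c.count             # prints 10
```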

Basically Erlang.

And Smalltalk, obviously.

Did Simula fit that description fully? I've never seen a Simula program in my life.

Simula was retroactively given the label "object-oriented", but the more appropriate term (according to Smalltalkers) for it would be "class-oriented" like its descendants C++/Java.

Fortunately it does not matter, because the industry seems to have mostly gotten out of the "if it's not X-oriented it's bad" dogmatic mindset that spurred the "OOP craze" of the 90s and early 2000s.

I dunno about that. Try being a JS dev that likes mutability and dislikes static typing.

If it's not JSON-oriented, it's bad.

If it doesn't use machine learning, it's bad.

If it doesn't use blockchain, it's bad.

Alan Kay said in some OOPSLA address that microservices were OOP.

I actually wanted to post the same thing.

Alan Kay's view of OOP is mostly just the actor model. But the actor model, like OOP, does not tell us how to actually build programs. "When do I use an object?" is a question OOP does not answer, "When do I use an actor?" is similarly not answered by the actor model.

Service Oriented Architecture is also effectively the actor model, but with a focus on the persistent process abstraction.

What Microservices does is take the actor model and say:

* Here's how you figure out where the actor abstractions should go, how large they should be, and patterns for interaction

Where microservices soooort of diverges is in its lack of discouragement of synchronous communication. But if you build your microservices using queues, I think you get the right sized actors with all of those benefits, without going down a rabbit hole of trying to structure every single bit of logic as an actor.

AWS's "Cell Based Architecture" is also just actors, but with its own set of patterns for how large to build them.

> So this means that languages like Swift, for example, are not OOP?

Swift claims to be protocol-oriented, actually. [0] is a video about it from Apple's WWDC in 2015.

[0] https://developer.apple.com/videos/play/wwdc2015/408/

Though that is a largely vacuous marketing claim, IMHO.


I don't think I agree with your claim here that it's a "largely vacuous marketing claim", but I do agree with most of what you wrote in that article.

I agree with the article that POP is OOP in a very real sense. As the article puts it: "The simple fact is that actual Object Oriented Programming is Protocol Oriented Programming, where Protocol means a set of messages that an object understands." This I 100% agree with.

But the fact of the matter is that many people do not consider the message-passing to be what defines OOP these days. I'm not saying these people are right, but rather that the common usage of the term does not align with the original use. When people teach OOP today, they teach it in terms of inheritance and methods and access modifiers. I don't think the word "message" even came up in this context once in my undergraduate studies.

I think Swift's usage of the term "protocol-oriented programming" is to distinguish themselves from the modern concept of OOP. If they said "Swift is an OO language, but we try to avoid classes and inheritance where possible", the developers trained in programs like mine would lose their minds because to them the two are one and the same.

So I don't think it's a "marketing claim" in the sense that I don't think they're using it to say "Ooh, look, we developed a whole new paradigm of programming!" Rather, I think they're trying to distinguish themselves from languages like Java (the current paragon of OOP it seems) which are 100% based on inheritance and encapsulation and couldn't care less for passing messages (explicitly).
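A tiny Ruby sketch of the "protocol = set of messages an object understands" reading (the class names are invented): callers depend only on which messages an object answers, not on any class hierarchy.

```ruby
# Two unrelated classes that happen to understand the same message, #area.
class Circle
  def initialize(r)
    @r = r
  end

  def area
    3.14159 * @r * @r
  end
end

class Square
  def initialize(s)
    @s = s
  end

  def area
    @s * @s
  end
end

# The "protocol" here is informal: anything answering #area qualifies,
# with no shared superclass or explicit interface declaration.
shapes = [Circle.new(1.0), Square.new(2)]
shapes.each { |s| puts s.area if s.respond_to?(:area) }
```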
