It really should be noted that years later Joe changed his mind about OO and came to the realization that perhaps Erlang is the only object-oriented language :) From a 2010 interview:

..."I wrote a an article, a blog thing, years ago - Why object oriented programming is silly. I mainly wanted to provoke people with it. They had a quite interesting response to that and I managed to annoy a lot of people, which was part of the intention actually. I started wondering about what object oriented programming was and I thought Erlang wasn't object oriented, it was a functional programming language.

Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.

Alan Kay himself wrote this famous thing and said "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages". He wrote that and he said that the initial reaction to object oriented programming was to overemphasize the classes and methods and under emphasize the messages and if we talk much more about messages then it would be a lot nicer. The original Smalltalk was always talking about objects and you sent messages to them and they responded by sending messages back."

See https://www.infoq.com/interviews/johnson-armstrong-oop (2010) for the full answer (and more), it's worth a read.




That speaks to one of the things that bothers me about OOP's intellectual traditions: there are two different ideas of what "object" can mean, and most object-oriented languages and practices deeply conflate the two.

On the one hand, "object" can mean a unification of data structures with the procedures that act on them. In this view, the ideal is for everything to be an "object", and for all the procedures to actually be methods of some class. This is the place from which we get both the motivation for Java's ban on functions that don't belong to classes, and the criticism of Java as not being truly OO because not every type is an object. In this view, Erlang is not OO, since, at the root, functions are separate from datatypes.

On the other hand, "object" can describe a certain approach to modularity, where the modules are relatively isolated entities that are supposed to behave like black boxes that can only communicate by passing some sort of message back and forth. This ends up being the motivation for Java's practice of making all fields private, and only communicating with them through method calls. In this view, Erlang is extremely OO, for all the reasons described in parent.

I haven't done an exhaustive analysis or anything, but I'm beginning to suspect that most of the woes that critics commonly describe about OO come from the conflation of these two distinct ideas.


don't forget inheritance. It's either orthogonal or essential to what it means to be 'object oriented', depending on who you are talking to.


I haven't, but, at least insofar as my thinking has developed (and insofar as Erlang supports it), the question of inheritance is more orthogonal than essential to the specific point I was trying to make and failed to state clearly, so here it is: This essay is right, and Armstrong is also right when he said "Erlang might be the only object-oriented language". The tension there isn't, at the root, because Armstrong was confused about what OOP is really about; it's because OOP itself was (and is) confused about what OOP is really about.

That said, I would also argue that, like "object", "inheritance" is a word that can describe many distinct concepts, and that here, too, OOP's intellectual traditions create a muddle by conflating them.


> don't forget inheritance

Inheritance is a limited convention to do mixins. Including it in the abstract idea of object oriented programming is harmful, other than in reference to the ugly history of "Classical OOP" or "Non-Kay OOP" as you like.


“I mainly wanted to provoke people...” I hate this. I see it way too often. It’s either a cop-out to avoid having to own up to your arguments or it’s just poisonous rhetoric in the first place that contributes to partisan opinions, especially when the speaker has an air of authority that causes people to accept what they say at face value. It is directly antithetical to critical thinking.


> It is directly antithetical to critical thinking.

I don't think it is.

Yes, it can get some people to just lash out in response.

But it also often forces people to think critically about how to convincingly justify their own standpoint to counter the provocation. This can be particularly useful when a viewpoint has "won" to the extent that people just blindly adopt it without understanding why.

It does have its problems in that it is hard to predict, and there's a risk that measured reactions get drowned out by shouting, so I'm not going to claim it's a great approach, but it has its moments.


True, I can see how in this case, at that time, it could be effective. But ironically, there seems to be a similar dogma surrounding FP these days - speaking even as a fan of the paradigm, with a perspective tempered by experience. I can’t help but think that polarized viewpoints like this contribute to replacing the subject of the idealization rather than the underlying problem of idealizing itself, if only indirectly due to the combination of the arguments themselves and the sense of authority behind them, rather than the merit of the arguments alone.


>This can be particularly useful when a viewpoint has "won" to the extent that people just blindly adopt it without understanding why.

Like the blind acceptance of OOP religion (not the message passing kind), since the 90s


Isn't a method call a message, and the return value a message back? Or is it that "true OO" must be asynchronous?


> Isn't a method call a message, and the return value a message back?

It is!

In my view, the point that Alan Kay and Joe Armstrong are trying to make is that languages like C++/Java/C# etc have very limited message passing abilities.

Alan Kay uses the term "late binding". In Kay's opinion, "extreme late binding" is one of the most important aspects of his OOP [1], even more important than polymorphism. Extreme late binding basically means letting the object decide what it's gonna do with a message.

This is what languages like Objective-C and Ruby do: deciding what to do when a method is dispatched always happens at runtime. You can send a message that does not exist and have the class answer to it (method_missing in Ruby); you can send a message to an invalid object and it will respond with nil (Objective-C, IIRC); you can delegate everything but some messages to a third object; you can even send a message to a class running on another computer (CORBA, DCOM).
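
To make that concrete, here's a minimal Ruby sketch of a receiver deciding at runtime what to do with a message it has no method for; the Greeter class and the greet_* messages are invented purely for illustration:

    # Invented example: the receiver decides at runtime how to answer a
    # message it has no predefined method for.
    class Greeter
      def method_missing(name, *args)
        if name.to_s.start_with?("greet_")
          "Hello, #{name.to_s.sub('greet_', '')}!"
        else
          super   # anything else raises the usual NoMethodError
        end
      end

      def respond_to_missing?(name, include_private = false)
        name.to_s.start_with?("greet_") || super
      end
    end

    Greeter.new.greet_joe    # => "Hello, joe!"
The caller just sends greet_joe; the receiver decides, at the moment the message arrives, what that means.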

In C++, for example, the only kind of late binding that you have is abstract classes and vtables.

-

> Or is it that "true OO" must be asynchronous?

It doesn't have to be asynchronous, but in Alan Kay's world, the asynchronous part of messaging should be handled by that "dispatcher", rather than putting extra code in the sender or the receiver.

I don't remember Alan Kay elaborating on it, but he discusses a bit about this "interstitial" part of OOP systems in [2]

-

[1] - https://en.wikipedia.org/wiki/Late_binding

[2] - http://wiki.c2.com/?AlanKayOnMessaging


C++'s vtable is also late binding, since you don't know which implementation you're calling until runtime. And there's no such thing as "extremely late binding".

> In C++, for example, the only kind of late binding that you have is abstract classes and vtables.

That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allows for the "method_missing" protocol and such features are very unpopular.

To be honest I don't see much of a difference. I've worked with a lot of dynamic OOP languages, including with Erlang-style actors and I've never seen the enlightenment of dynamic OOP message passing.

And I actually like OOP, but I don't really see the point of all this hyperbole about Smalltalk.


> That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allows for the "method_missing" protocol and such features are very unpopular.

That is the difference. If every class in C++ had only one method - send_message - and each object were an independent thread, you would get something close to how Erlang works. That is how you would do the actor model in C++.
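
A rough Ruby sketch of that shape - one public send_message per object, each object draining its own mailbox on its own thread. This is only an illustration of the idea, not how Erlang (or any real actor library) is implemented, and the message names are made up:

    # Rough sketch: each "object" owns a thread draining its own mailbox,
    # and the only public operation is send_message. The :ping/:stop
    # message names are invented purely for this example.
    class Actor
      def initialize
        @mailbox = Queue.new
        @thread  = Thread.new { loop { break unless handle(@mailbox.pop) } }
      end

      def send_message(msg)
        @mailbox << msg
      end

      def join
        @thread.join
      end

      private

      # Returns false when the actor should stop processing.
      def handle(msg)
        case msg[:type]
        when :ping
          puts "pong from #{object_id}"
          true
        when :stop
          false
        else
          true
        end
      end
    end

    a = Actor.new
    a.send_message({ type: :ping })
    a.send_message({ type: :stop })
    a.join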

Inheritance and polymorphism are emphasised in Java, C++ and C#, whereas functional programmers emphasise function objects / lambdas / the Command pattern, where you just have one method - calling the function. In fact, with just one method you no longer need polymorphism / interfaces.


What? This has nothing to do with functional programming.

FP needs polymorphism too and as a matter of fact FP tends to be even more static.

In FP we have type classes, built via OOP in static OOP languages.

> Infact having just method you no longer need Polymorphism / Interfaces.

That’s false.


It's not. You can use multiple dispatch.


But then why would you? Isn't ditching type safety a bad idea most of the time?


> C++'s vtable is also late binding, since you don't know which implementation you're calling until runtime. And there's no such thing as "extremely late binding".

C++'s vtables are determined at compile time. The specific implementation executed at a given moment may not be possible to deduce statically, but the set of possible methods is statically determined for every call site: It consists of the set of overridden implementations of the method with that name in the class hierarchy from the named type and downwards.

No such restriction exists in Ruby or Smalltalk or most other truly dynamic languages. E.g. for many Ruby ORMs the methods that will exist on a given object representing a table will not be known until you have connected to the database and read the database schema from it, and at the same time I can construct the message I send to the object dynamically at runtime.

Furthermore the set of messages a given object will handle, or which code will handle it, can change from one invocation to the next. E.g. memoization of computation in Ruby could look sort-of like this:

    class Memo
      def method_missing(op, *args)
        result = nil # ... execute expensive operation here ...
        define_singleton_method(op) { result }
        result       # also answer the first call with the result
      end
    end
After the first calculation of a given operation, instead of hitting method_missing, it just finds a newly created method returning the result.

"Extreme late binding" is used exactly because people think things like vtables represent late-binding, but the ability to dynamically construct and modify classes and methods at runtime represents substantially later binding.

E.g. there's no reason why all the code needs to be loaded before it is needed, with methods constructed at that time. And incidentally this is not about vtables or not vtables - they are an implementation detail. Prof. Michael Franz's paper on Protocol Extension [1] provided a very simple mechanism for Oberon that translates nicely to vtables by dynamically augmenting them as code is loaded at runtime. For my (very much incomplete) Ruby compiler, I use almost the same approach to create vtables for Ruby classes that are dynamically updated by propagating the changes downwards until it reaches a point where the vtable slot is occupied by a different pointer than the one I'm replacing (indicating the original method has been overridden). Extending the vtables at runtime (as opposed to adding extra pointers) would add a bit of hassle, but is also not hard.

The point being that this is about language semantics in terms of whether or not the language allows changing the binding at runtime, not about the specific method used to implement the method lookup semantics of each language - you can implement Ruby semantics with vtables, and C++ semantics by a dictionary lookup. That's not the part that makes the difference (well, it affects performance).

> That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allows for the "method_missing" protocol and such features are very unpopular.

If you're working in a language with static typing you've already bought into a specific model; it's totally unsurprising that people who have rejected dynamic typing in their language choice will reject features of their statically typed language that do dynamic typing. I don't think that says anything particularly valuable about how useful it is. Only that it is generally a poor fit for those types of languages.

[1] Protocol Extension: A Technique for Structuring Large Extensible Software Systems, ETH Technical Report (1994) http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.42....


The only good thing about OO as an architecture is that there is nearly no education required to introduce it to even the most novice person in the field. It's basically the default thinking approach, rebranded. It comes with all the benefits of a mental model - quick orientation - and all the negatives of a mental model. (Badly adapted to machine execution: after a certain complexity level is reached, god-like actor objects - basically programmers in software disguise - start to appear.)


Disagree. The original design patterns book was really about ways OOP should be used that don't fit people's everyday conception of objects. (Of course that causes different problems for the novice keen to use the patterns but that's another story)


I'm surprised the actor model hasn't been mentioned. Isn't this the modern name for what they're talking about?

Completely independent objects passing messages and entirely parallelizable.



My first exposure to the actor model was with Akka on Scala. After working with it for a little while, I thought "this is what OOP should be, perhaps I just hate broken implementations of OOP (i.e., Java, C++), rather than OOP itself." Heck, I like Ada95's implementation of OOP better than Java's.

I keep meaning to give Erlang a try, but just haven't had a reason yet. I do a lot of Clojure, these days :)


If you like Akka/Scala, definitely give Erlang a try.


I can highly recommend Elixir as a pleasant entry point. I've been looking into learning Erlang too though, much as the syntax is a bit daunting.


What does late binding buy you? That sounds like an argument for non-strictly typed languages. Isn't it the strict typing that prevents late binding? The compiler wants to know at compile time the types of all the messages and whether or not an object can handle that message, hence all messages must be typed and all objects must declare which messages they accept.


> What does late binding buy you?

Some things that come to mind:

- Abstract classes/methods, and interfaces. This is implemented using vtables in C++.

- Ability to send messages asynchronously, or to other computers, without exposing the details of such things. You just call a method in another class and let your dispatcher handle it. There was a whole industry built around this concept in the 90s: CORBA, DCOM, SOAP. And Erlang, of course, in a different way.

- Ability to change the class/object during runtime. Like you can with Javascript and Lua, calling `object.method = `. Javascript was inspired by Self (a dialect of Smalltalk), so there's that lineage. Other languages like Python and Ruby allow it too.

- Ability to use the message passing mechanism to capture messages and answer them. Similar to Ruby's "method_missing" and ES6 Proxies in Javascript. This is super useful for DSLs and a great abstraction to work with. Check this out: http://npmjs.com/package/domz (a short Ruby sketch of this and the previous point follows below)

Remember that you can have some of those things without dynamic typing (Objective-C).
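
As promised above, a short Ruby sketch of those last two points (runtime redefinition and capturing unknown messages); the Document and Xml classes and their methods are invented for the example:

    # Invented example: behavior can be (re)defined while the program runs,
    # and unknown messages can be captured DSL-style.
    class Document
      def title
        "untitled"
      end
    end

    doc = Document.new
    doc.title                                # => "untitled"

    # Redefine the method at runtime; existing objects pick it up.
    Document.define_method(:title) { "Late Binding in Practice" }
    doc.title                                # => "Late Binding in Practice"

    # Capture arbitrary messages, as method_missing-based DSLs do.
    class Xml
      def method_missing(tag, content = nil)
        "<#{tag}>#{content}</#{tag}>"
      end
    end

    Xml.new.chapter("Messages")              # => "<chapter>Messages</chapter>"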


Objective-C is a compiled language


Objective C is compiled, yes, but like Smalltalk OOP, the target object of the message is resolved and interpreted by that object at runtime.


Objective-C is much closer to true object orientation than C++, but IMO Apple neutered it by having the program crash if there was no message handler.


It crashes only if you let it.

a) The crash is from the default unhandled exception handler, which will send a signal to abort. So if you just don't want to crash, you can either handle that particular exception or install a different unhandled exception handler

b) An object gets sent the -forwardInvocation: message when objc_msgSend() encounters a message the object does not understand. The exception above gets raised by the default implementation of -forwardInvocation: in NSObject.

    o := NSObject new.
    o class
    -> NSObject
    n := NSInvocation invocationWithTarget:o andSelector: #class
    n resultOfInvoking class 
    -> NSObject
    o forwardInvocation:n 
    2019-04-22 07:49:12.339 stsh[5994:785157] exception sending message: -[NSObject class]: unrecognized selector sent to instance 0x7ff853d023c0 offset: {
(This shows that -forwardInvocation: in NSObject will raise that exception, even if the NSInvocation is for a message the object understands)

If you override -forwardInvocation:, you can handle the message yourself. In fact, that is the last-ditch effort by the runtime. You will first be given the chance to provide another object to send the message to ( - (id)forwardingTargetForSelector:(SEL)aSelector; ) or to resolve the message in some other way, for example by installing the method ( + (BOOL)resolveInstanceMethod:(SEL)sel; )[0].

Cocoa's undo system is implemented this way[1], as is Higher Order Messaging[2][3]

[0] https://developer.apple.com/documentation/objectivec/nsobjec...

[1] https://developer.apple.com/documentation/foundation/nsundom...

[2] https://en.wikipedia.org/wiki/Higher_order_message

[3] https://github.com/mpw/HOM/blob/master/HOM.m


Back when I wrote a lot of obj-c is when I really 'got' message passing vs. a function call. I miss obj-c, but everyone wants to move on to Swift.


Wasn't the design decision (and implementation) involved in place long before Apple had anything to do with it?


NextStep adopted it but did not invent it. Once Apple acquired NextStep and released OS X they were the only major company supporting it and had defacto control over the language.

The complaint I have is with NSObject, which can be blamed on Next Step. Although, as another comment pointed out, I just didn't know about a workaround.


There were two different major mutually-incompatible “flavors” of Objective-C (my first book on Objective-C covered both, and my first Objective-C programming was done on a NeXTcube), one of which originated at NeXT (NextStep was the OS that was NeXT's last major surviving product after they dropped hardware, not the company.)


Extreme Late binding: for "The Pure Function Pipeline data Flow", attaching data or metadata to the data flow, then the pipeline function parses it at run time, which is simpler, more reliable, and clearer.


C++, Java etc. all lack proper union types with appropriate pattern matching. So a lot of useful message passing patterns cannot be implemented without too much boilerplate.
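
For contrast, here's roughly what pattern matching over message tuples looks like in Ruby 3's case/in; the message shapes below are invented for the example:

    # Handling message tuples with Ruby 3's case/in pattern matching.
    # The message shapes below are invented for the example.
    def handle(message)
      case message
      in [:deposit, Integer => amount]
        "deposited #{amount}"
      in [:withdraw, Integer => amount] if amount <= 100
        "withdrew #{amount}"
      in [:balance]
        "balance requested"
      else
        "unknown message: #{message.inspect}"
      end
    end

    handle([:deposit, 50])     # => "deposited 50"
    handle([:withdraw, 500])   # => falls through to else (guard failed)
Getting the same destructuring in classic C++ or Java typically means a visitor hierarchy or a chain of casts, which is the boilerplate the parent comment is pointing at.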


I think that in the spirit of OO, an object has agency over how a message is interpreted in order for it to be considered a message. If the caller has already determined for the object that it is going to call a method then the object has lost that agency. In a 'true OO' language an object may choose to invoke a method that corresponds to the details within the message, but that is not for the caller to decide.

Consider the following Ruby code:

    # One version: respond to `foo` with an ordinary method.
    class MyClass
      def foo
        'bar'
      end
    end

    # An alternative version: handle the `foo` message dynamically.
    # (Named MyClass2 here so the two definitions can coexist.)
    class MyClass2
      def method_missing(name, *args, &block)
        if name == :foo
          return 'bar'
        end
        super
      end
    end
To the outside observer, the two classes are effectively equivalent. Since, conceptually, a caller only sends a message `foo`, rather than calling a method named `foo`, the two classes are able to make choices about how to handle the message. In the first case that is as simple as invoking the method of the same name, but in the second case it decides to perform a comparison on the message instead. With reception of a message, it is free to make that choice. To the caller, it does not matter.

If the caller dug into the MyClass object, found the `foo` function pointer, and jumped into that function then it would sidestep the message passing step, which is exactly how some languages are implemented. In the spirit of OO, I am not sure we should consider such languages to be message passing, even though they do allow methods to be called.


Is it unreasonable to think of the method as a semantic "port" to which messages (arguments) are passed?

And languages that allow programmers to bypass OO with jmp instructions seem multiparadigm rather than not-OO...


> semantic "port"

Not unreasonable at all! In fact the term used in Objective-C and Ruby is “selector”. Beneath the synchronous veneer anyway.


vtables are an implementation detail. To compile Ruby with vtables, consider this:

    class A
      def foo; end
    end
    
    class B < A
      def foo; end
      def bar; end
    end
Now you make a vtable for class A that looks conceptually something like this:

    slot for foo = address_of(A#foo)
    slot for bar = method_missing_thunk(:bar)
And a vtable for class B that looks like this:

    slot for foo = address_of(B#foo)
    slot for bar = address_of(B#bar)
The point being that you can see every name used in a method call statically during parsing, and can add entries like `method_missing_thunk(:bar)` to the vtable, that just pushes the corresponding symbol onto the stack and calls a method_missing handler that tries to send method_missing to the objects.

You still need to handle #send, but you can do that by keeping a mapping of symbols => vtable offset. Any symbol that is not found should trigger method_missing; that handles any dynamically constructed names, and also allows for dynamically constructed methods with names that have not been seen as normal method calls.

When I started experimenting with my Ruby compiler, I worried that this would waste too much space, since Ruby's class hierarchy is globally rooted and so without complicated extra analysis to chop it apart every vtable ends up containing slots for every method name seen in the entire program, but in practice it seems like you need to get to systems with really huge numbers of classes before it becomes a real problem, as so many method names get reused. Even then you can just cap the number of names you put in the vtables, and fall back to the more expensive dispatch mechanism for methods you think will be called less frequently.

(redefining methods works by propagating the new pointer downwards until you find one that is overridden - you can tell it's overridden because it's different than the pointer at the site where you started propagating the redefined method downwards; so this trades off cost of method calls with potentially more expensive method re-definition)


What is the advantage of doing that instead of using an IObservable that can filter on the event name in C# or, even better in F#, having an exhaustive pattern match that automatically casts the argument to the expected type and notifies you at compile time if you forgot to handle some cases?


In Kay's OO the only way to interact with an object was through message passing. It was important that the internal state of an object was kept private at all times.

Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world.

But we see getters/setters used constantly. People don't use OO in the way Kay intended. Yes, methods are the implementation of the whole "message passing" thing Kay was talking about, but we see them used in ways he did not intend.


In my experience, getter/setter abuse is always an attempt to use classes as structs/records.

I wonder whether, if we had different syntax for those cases, we'd have fewer of them.

But then, again, it's very convenient to be able to add a method to a class that was previously a dumb struct.


Maybe I am a complete philistine but is that really a bad thing or just something which goes against their categorism? I get that there are some circumstances where setters would break assumptions but classes are meant to be worked with, period.


Objects are meant to have a life cycle in which the state should only be changed by the object itself. Setters violate this idea by allowing the sender of the message direct control over the state of the object.

A simplistic example: account.deposit(100) may directly add 100 to the account's balance and a subsequent call to account.balance() may answer 100 more than when account.deposit(100) was called. But those details are up to that instance of the account not the sender of those messages. The sender should not be able to mutate account.balance directly, whether it be via direct access to the field or through the proxy of a setter.
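
A sketch of that account example in Ruby (the Account class here is invented just to make the point concrete):

    # The object owns its own life cycle: callers can ask it to deposit or
    # report the balance, but cannot assign the balance from outside.
    class Account
      def initialize
        @balance = 0
      end

      def deposit(amount)
        raise ArgumentError, "amount must be positive" unless amount.positive?
        @balance += amount
      end

      def balance
        @balance
      end
      # Deliberately no balance= setter: state changes only through messages
      # like deposit that the object itself interprets.
    end

    account = Account.new
    account.deposit(100)
    account.balance   # => 100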


Well... a setter is the object changing its own state. That's why the setter has to be a member function.

I would say instead that an object shouldn't have setters or getters for any members unless really necessary. And by "necessary", I don't mean "it makes it easier to write code that treats the object as a struct". I mean "setting this field really is an action that this object has to expose to the external world in order to function properly". And not even "necessary" because I coded myself into a corner and that's the easiest way out I see. It needs to be necessary at the design level, not at the code level.


It depends; most of the time it's better to have separate functions that transform your data rather than have methods and state conflated together. But obviously it depends on the context.


Yeah there are no hard and fast rules but a lot of the time transformations can be in the object as well. If I need a function to transform Foo to Bar I could just as easily send a toBar() message to an instance of Foo.


I think C# really got the best of both worlds with extension methods, where you can actually define functions that act on an object but are separated from the actual class definition. I still think that pure functions and especially higher-kinded types are probably better, although I have no direct experience with Haskell type classes, Scala implicits and OCaml modules.


It's not exactly a bad thing, it's just that you're using a hammer (class) when what you actually need is a screwdriver (struct/record).

Abusing getters/setters is breaking encapsulation (I said abusing, light use is ok). If you're just going to expose all the innards of the class, why start with a Class?

The whole point of object orientation is to put data and behavior together. That's probably the only thing that both the C++/Java and the Smalltalk camps agree on.

Separating data and the behavior into two different classes breaks that. You're effectively making two classes, each with "half of a responsibility". I can argue that this breaks SRP and the Demeter principle in one go.

Another thing: Abuse of getters/setters is often a symptom of procedural code disguised as OOP code. If you're not going to use what is probably the single biggest advantage of OOP, why use it at all?

-

Here's an answer that elaborates on this that I like:

https://softwareengineering.stackexchange.com/questions/2180...


> The whole point of object orientation to put data and behavior together

May I politely disagree, based on my long-ago experience with Dylan, which has multi-methods (<https://en.wikipedia.org/wiki/Multimethods>). This allowed the action on the data (the methods) to be defined separately from the data. I strongly feel that it was OO done right, and it felt right. You can read about it on the wiki link but it likely won't click until you play with it.

I'd like to give an example but it's too long ago and I don't have any to hand, sorry.


It’s a different semantic in my opinion. Even in mutable objects it’s better to have setters that act only on the field that they are supposed to mutate and do absolutely nothing else. If you need a notification you can raise an event and then the interested parties will react accordingly. By directly mutating an unrelated field in the setter, or even worse, calling an unrelated method that wreaks complete havoc on the current object state, you are opening yourself up to an incredible amount of pain.


I disagree, slightly. A setter (or any method, for that matter) has to keep the object in a consistent state. If it can't set that one field without having to change others, then it has to change others.

Now, if you want to argue that an object probably shouldn't be written in the way that such things are necessary, you're probably right. And if you want to argue that it should "just set the one field in spirit" (that is, that it should do what it has to to set the field, but not do unrelated things), I would definitely agree with you. But it's not quite as simple as "only ever just set the one field".


> Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world.

No, they don't, because “more or less” is not actually directly. Particularly, naive getters and setters can be (and often are) replaced with more complex behavior with no impact to consuming code because they are simply message handlers, and they abstract away the underlying state.


> No, they don't, because “more or less” is not actually directly.

I disagree.

Consider a `Counter` class, intended to be used for counting something. The class has one field: `Counter.count`, which is an integer.

A setter/getter for this field would be like `Counter.setCount(i: Int)` and `Counter.getCount() -> Int`. There is no effective difference between using these methods and having direct access to the internal state of the object.

A more "true OOP" solution would be to use methods with semantic meaning, for example: `Counter.increment()`, `Counter.decrement()`, and `Counter.getCount() -> Int`. (Yes, the getter is here because this is a simple example.) These kinds of methods are not directly exposing the internal state of the object to be freely manipulated by the outside world.

If your getter/setter does something other than just get/set, then it's not really a getter/setter anymore — it's a normal method that happens to manipulate the state, which is fine. But using getters/setters (in the naive, one-line sense) is commonplace with certain people, and I feel that their use undermines the principles Kay was getting at.


I have seen side effects for completely unrelated fields in setters. Heck, I’ve even witnessed side effects in bloody getters. This is the reason why now I’m a huge fan of immutable objects. Actually nowadays I became a fan of functional languages with first class immutability support.


> but they undermine the design goal because they more or less directly expose internal state to the public world.

This has always been my problem with getters and setters. It's a way of either pretending you are not, or putting band-aids on, the fact that you're messing with the object's internal state. For objects with dynamic state this is really bad. The result is racy or brittle.


> Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world

If they do, that's your fault for letting them. I guess you mean when people chain stuff thus

company.programmers.WebDevs.employ('fred')

where .programmers and .WebDevs are exposed internals of the company and the programmers department respectively? (I've seen lots of this, and in much longer chains too. We all have). In which case please see the Principle of Demeter <https://en.wikipedia.org/wiki/Law_of_Demeter> which says don't do this. Wiki article is good.
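
A sketch in Ruby of the Demeter-friendly alternative, with class names invented to mirror the chain above:

    # Instead of reaching through company.programmers.WebDevs, the caller
    # sends one message and each object delegates internally.
    class WebDevTeam
      def initialize
        @members = []
      end

      def employ(name)
        @members << name
      end
    end

    class ProgrammersDepartment
      def initialize(web_devs)
        @web_devs = web_devs          # kept private; no reader exposed
      end

      def employ_web_dev(name)
        @web_devs.employ(name)
      end
    end

    class Company
      def initialize(programmers)
        @programmers = programmers    # kept private; no reader exposed
      end

      def employ_web_dev(name)
        @programmers.employ_web_dev(name)
      end
    end

    company = Company.new(ProgrammersDepartment.new(WebDevTeam.new))
    company.employ_web_dev('fred')    # one message; no internals exposed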

I doubt any language can prevent this kind of 'exposing guts' malpractice, it's down to the humans.

I remember reading that when Alan Kay saw the Linda model (<https://en.wikipedia.org/wiki/Linda_(coordination_language)>) he said it was closer to what he wanted Smalltalk to be.


> I doubt any language can prevent this kind of 'exposing guts' malpractice

Actually, true OOP languages do prevent this. Internal state is completely private and cannot be exposed externally. The only way to interact with an object's state is through its methods — which means the object itself is responsible for knowing how to manipulate its internal state.

Languages like Java are not "true" OOP in this sense, because they provide the programmer with mechanisms to allow external access to internal state.

Internal state should be kept internal. You shouldn't have a class `Foo` with a private internal `.bar` field and then provide public `Foo.getBar()` and `Foo.setBar()` methods, because you may as well just have made the `.bar` field public in that case.

Also, FWIW, I did not downvote you. I dunno why you were downvoted. Seems you had a legitimate point here, even if I disagree with it.


> Internal state should be kept internal.

I'm not sure that's a proven model. It's a proposed model, for sure. Since you can't protect memory from runtime access, you can't really protect state, so it's a matter of convention which Python cleverly baked in (_privatevar access).


Ah sorry, I was speaking in the context of Kay's OOP! In retrospect my phrasing made it seem like I was stating an opinion as fact, but what I meant was just that Kay's OOP mandated that internal state could not be exposed and was very opinionated on the matter.


Why downvoted? I don't mind being wrong but would like to know where and why.


When I think of message passing, I think of message queues. There should be an arbiter, a medium of message passing so you can control how that message is passed and how it will arrive.

The Java and C++ ways of message passing both stripped that medium down to a simple vtable to look up what methods the object has. Erlang and Go have the right idea of passing messages through a medium that can serialize and multiprocess them. C# tries to do this with further abstractions like parallelized LINQ queries, and C#, Python and Node.js use async/await to delegate messages to event queues. Python can also send messages to multiple processes. All this shows us that message passing requires a medium that primitive method calls lack.
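
As a toy illustration of such a medium, here is a minimal Ruby broker sitting between senders and receivers; the topic names are invented, and it ignores real-world concerns like thread-safe subscription and backpressure:

    # Toy "medium": senders never call receivers directly; a broker thread
    # pulls messages off a queue and routes them to subscribers.
    class Broker
      def initialize
        @queue    = Queue.new
        @handlers = Hash.new { |h, k| h[k] = [] }
        Thread.new { loop { dispatch(@queue.pop) } }
      end

      def subscribe(topic, &handler)
        @handlers[topic] << handler
      end

      def publish(topic, payload)
        @queue << [topic, payload]    # the medium decides when/how it is delivered
      end

      private

      def dispatch(message)
        topic, payload = message
        @handlers[topic].each { |handler| handler.call(payload) }
      end
    end

    broker = Broker.new
    broker.subscribe(:greeting) { |msg| puts "got: #{msg}" }
    broker.publish(:greeting, "hello")
    sleep 0.1   # give the broker thread a moment in this toy example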


>both stripped that medium down to a simple vtable to look up what methods the object has.

If they used a vtable it'd just be slow. Not needing the trampoline, and the ability to inline harder, is what makes it fast. The usual case is class hierarchy analysis, static calls (no more than a single implementer, proven by the compiler), guarded calls (check + inline; Java deoptimizes if need be), bi-morphic call-site inlining, inline caches, and if that fails - the vtable thing.

Message passing in a classical way is just awfully slow for a bottom-of-the-stack building block. It doesn't map to the hardware. It does make sense for concurrency with bounded, lock-free queues (actor model). But at some point, someone has to do the heavy lifting.


I suppose C++-style method calls are a limited form of OO, without asynchronicity, running in independent threads when required, no shared state, ability to upgrade or restart a failed component...


No, it does not have to be async. My impressions from using Squeak regarding this matter:

1. You can send any message to any object. In case the object does not have a suitable handler, you will get an exception: <object> does not understand <message>. The whole thing is very dynamic.

2. There is no `static` BS like in C# or Java. This is because each method has to be a method of an object. For each class there is a metaclass which is an object too, see: https://en.m.wikipedia.org/wiki/Metaclass#/media/File%3ASmal...


You can implement Smalltalk like patterns in C# via dynamic types and expression trees.


> "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about [function calls]."


> It really should be noted that years later Joe changed his mind about OO and came to the realization that perhaps Erlang is the only object-oriented language :)

But not in the way he's describing OO in his blog post. He's talking about a language with functions bound to objects and where objects have some internal state. The OO he's describing does not have isolation between objects because you can share aliases freely; references abound.


Nobody can agree on what OOP really is. I've been in and seen many long debates on the definition of OOP. It's kind of like a Rorschach test: people project their preferences and biases into the definition.

Until some central body is officially appointed definition duty, the definition debate will rage on.


Is this different from ANY other concept in technology? Personal Computing, Big Data, Cloud Computing, Deep Learning, Artificial Intelligence? We never have real definitions for any of these, and if you attempt to make one it will be obsolete before you finish your blog post.

The only real problem I see is that too many technologists insist that there is 'one definition to rule them all' and it's usually the one they most agree with. As long as we all understand that these terms are fluid and can explain the pros and cons of our particular version we will be fine.


The OO languages we are using should be called class-oriented instead of object-oriented.


Mutation-oriented, or maybe obfuscation-oriented


Simula, Smalltalk and CLOS have plenty of mutations.


If pretty much every single implementation of OO languages misunderstood Kay, that just means Kay either didn't explain himself well or OO as he intended it is so easy to misunderstand that it's almost useless as a programming paradigm. At this point, it really doesn't matter anymore. OO is what OO languages like C++ and Java have made it. The original author in no way has a monopoly or even a privileged viewpoint in the matter.

And frankly, I agree with the original article. OO is very poor and leads to a lot of misunderstandings because it has a lot of problems in its core design. It "sucks." It never made much sense to me, and clearly it never made much sense even to the people designing languages such as C++ or Java, because it's taken decades to come up with somewhat useful self-imposed limitations and rules on how to use OO without ending up with an ugly mess.

It's completely unintuitive and out of the box misleads just about every beginner who tries to use it. A programming paradigm should make it obvious how it's supposed to be used, but OO does the opposite. It obfuscates how it should be used in favor of mechanisms like inheritance that lead users down a path of misery and pain due to complexity and dead ends that require rewriting code. In most cases, it's mostly a way to namespace code in an extremely complicated and unintuitive manner. And we haven't even touched the surface as to its negative influences on data structures.


So, microservices are another attempt at emulating a good pattern with a huge pile of bad ones? :)


Both Alan Kay and Joe Armstrong struck me as having had the same attitude of trying to capitalize on the topic of object oriented programming, failing to recognize its importance, and then later trying to appropriate it by redefining it.

Not the best moment of these otherwise two bright minds.


Didn’t Alan Kay coin the term “object oriented”?


I don't believe he claims to, no.

He coined the term “object,” but what he meant by a computational object was different than what it came to mean: a data structure with associated operations upon it. Kay meant a parallel thread of execution which was generally sitting in a waiting state—one could make a very strong analogy between Smalltalk's vision of “objects” and what we call today “microservices,” albeit all living within the same programming language as an ecosystem rather than all being independent languages implementing some API.

But whether this is an “object-oriented” vision depends on whether you think that an object is intrinsically a data structure or an independent computer with its own memory speaking a common API. The most visible difference is that in the latter case one object mutating any other object's properties is evil—it is one computer secretly modifying another’s memory—whereas in the other case it is shrug-worthy. But arguably the bigger issue is philosophical.

That is hard to explain and so it might be best to have a specific example. So Smalltalk invents MVC and then you see endless reinventions that call themselves MVC in other languages. But most of these other adaptations of MVC have very object-oriented models: they describe some sort of data structure in some sort of data modeling language. But that is not the “object” understanding of a model in Smalltalk. When Smalltalk says “model” it means a computer which is maintaining two things: a current value of some data, and a list of subscribers to that value. Its API accepts requests to create/remove subscriptions, to modify the value, and to read the value. The modifications all send notifications to anyone who is subscribed to the value. There is not necessarily anything wrong with data-modeling the data, but it is not the central point of the model, which is the list of subscribers.

A more extreme example: no OOP system that I know of would do something as barbarous as to implement a function which would do the following:

> Search through memory for EVERY reference to that object, and replace it with a reference to this object.

That just sounds like the worst idea ever in OOP-land; my understanding of objects is as data structures which are probably holding some sort of meaningful data; how dare you steal my data structure and replace it with another. But Smalltalk has this; it is called Object.become. If you are thinking of objects as these microservicey things then yeah, of course I want to find out how some microservice is misbehaving and then build a microservice that doesn't misbehave that way and then eventually swap my new microservice in for the running one. (That also hints at the necessary architecture to do this without literally scanning memory: like a DNS lookup giving you the actual address, every reference to an object must be a double-star pointer under the hood.) And as a direct consequence, when you are running Smalltalk you can modify almost every single bit of functionality in any of the standard libraries to be whatever you need it to be, live, while the program is running. Indeed the attitude in Smalltalk is that you will not write it in some text editor, but in the living program itself: the program you are designing is running as you are writing it and you use this ability to swap out components to massage it into the program that you need it to become.
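
A toy Ruby sketch of that "every reference goes through a handle" indirection, which is what makes a become-style swap possible. This is not how Smalltalk actually implements become:, just the shape of the idea:

    # Toy version of "every reference goes through a handle", which is what
    # makes a become-style swap possible. Not how Smalltalk implements
    # become:, just the shape of the indirection.
    class Handle
      def initialize(target)
        @target = target
      end

      # Re-point the handle: every holder of this handle now talks to new_target.
      def become(new_target)
        @target = new_target
      end

      def method_missing(name, *args, &block)
        @target.send(name, *args, &block)
      end

      def respond_to_missing?(name, include_private = false)
        @target.respond_to?(name, include_private) || super
      end
    end

    old_service = Object.new.tap { |o| o.define_singleton_method(:greet) { "v1" } }
    new_service = Object.new.tap { |o| o.define_singleton_method(:greet) { "v2" } }

    service = Handle.new(old_service)
    service.greet               # => "v1"
    service.become(new_service)
    service.greet               # => "v2" -- swapped live, callers unchanged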


I didn't coin the term "object" -- and I shouldn't have used it in 1966 when I did coin the term "object-oriented programming" flippantly in response to the question "what are you working on?".

This is partly because the term at the time meant a patch of storage with multiple data fields -- like a punched card image in storage or a Sketchpad data-structure.

But my idea was about "things" that were like time-sharing processes, but for all entities. This was a simple idea that was catalyzed by seeing Sketchpad and Simula I in the same week in grad school.

The work we did at Parc after doing lots of software engineering to get everything to be "an object", was early, quite successful, and we called it "object-oriented programming".

I think this led to people in the 1980s wanting to be part of this in some way, and the term was applied in ways that weren't in my idea of "everything from software computers on a network intercommunicating by messages".

I don't think the term can be rescued at this point -- and I've quit using it to describe how we went about doing things.

It's worth trying to understand the difference between the idea, our pragmatic experiments on small machines at Xerox Parc, and what is called "OOP" today.

The simplest way to understand what we were driving at "way back then" was that we were trying to move from "programming" as it was thought of in the 60s -- where programs manipulated data structures -- to "growing systems" -- like Smalltalk and the Internet -- where the system would "stay alive" and help to move itself forward in time. (And so forth.)

The simplest way to think about this is that one way to characterize systems is that which is "made from intercommunicating dynamic modules". In order to make this work, one has to learn how to design and maintain systems ...


Oh wow.

I was really not expecting you to join this conversation and I am very thankful to have crossed paths with you, even so briefly. Sorry for getting you wrong about the “objects” vs. “OOP” thing.

I have thought you could maybe call it “node-oriented” or “thread-oriented” but after reading this comment I think “ecosystem-oriented” might be more faithful a term?


Alan suggested "server-oriented programming" in Quora:

https://www.quora.com/What-is-Alan-Kays-definition-of-Object...


I think the inspiration from Simula I is something a lot of folks either don't know about, or maybe they know about it but don't recognize its significance. Objects with encapsulated state that respond to well-defined messages are a useful level of abstraction for writing simulations of the sort Simula was built for. They're just not automatically a particularly wieldy abstraction for systems that aren't specifically about simulation. Some (most?) of that is about the skill of the programmer, imo, not some inherent flaw in the abstraction itself.

P.S.: Thank you for all your contributions to our profession, and for your measured response to these kinds of discussions.


It's out of the context of this thread, but we were quite sure that "simulation-style" systems design would be a much more powerful and comprehensive way to create most things on a computer, and most especially for personal computers.

At Parc, I think we were able to make our point. Around 2014 or so we brought back to life the NoteTaker Smalltalk from 1978, and I used it to make my visual material for a tribute to Ted Nelson. See what you think. https://www.youtube.com/watch?v=AnrlSqtpOkw&t=135s

This system --including everything -- "OS", SDK, Media, GUI, Tools, and the content -- is about 10,000 lines of Smalltalk-78 code sitting on top of about 6K bytes of machine code (the latter was emulated to get the whole system going).

I think what happened is that the early styles of programming, especially "data structures, procedures, imperative munging, etc." were clung to, in part because this was what was taught, and the more design-intensive but also more compact styles developed at Parc seemed very foreign. So when C++, Java, etc. came along the old styles were retained, and classes were relegated to creating abstract data types with getters and setters that could be munged from the outside.

Note that this is also "simulation style programming" but simulating data structures is a very weak approach to design for power and scaling.

I think the idea that all entities could be protected processes (and protected in both directions) that could be used as communicating modules for building systems got both missed and rejected.

Of course, much more can and should be done today more than 40 years after Parc. Massive scaling of every kind of resource requires even stronger systems designs, especially with regard to how resources can be found and offered.


Are you the Alan Kay? Is there any way we can verify this is you? The HN user account seems to have a very low "karma" rating, so one can't help but be more suspicious.


I'm the "computing Alan Kay" from the ARPA/Parc research community (there's a clarinettist, a judge, a wrestler, etc.) I did create a new account for these replies (I used my old ARPA login name).


It's really cool that you weigh in on discussions on HN. Or I suppose it feels like that to me primarily because I grew up reading your quotes in info text boxes in programming texts. And it's cool to have that person responding to comments.


It’s a new account created yesterday. Alan Kay did an AMA here a while back+ with the username “alankay1” and occasionally posted elsewhere. That account’s last post was 7 months ago. Given that user “Alan-1”s style and content is similar, it seems likely that he created a new account after half a year away from HN.

If you want verification, maybe you can convince him to do another AMA =) I’m still thinking about his more cryptic answers from the last one, which is well worth a read. I think that was before Dynamicland existed, but I may be off.

+ https://news.ycombinator.com/item?id=11939851


That's what he claims but there's zero evidence besides his own word.


Well, how large is the pool of other possible candidates? Wouldn't someone from that time period (say the Simula folks, or another PARC employee) challenge that assertion? Why would he lie?


Ah, so you have evidence that it was somebody else?


I don't, but that's not how the burden of proof works.


Every source I've ever come across on this topic (and I work in PL research) points to Kay as the originator of the term "object-oriented" in relation to programming. No exceptions.

You are now making an affirmative assertion that Alan Kay did not coin the term. The burden of proof is on you, not him.


Link these sources, then! Even someone who recently interviewed him and researched the subject for months confessed he could never corroborate that claim.

You make the claim he coined the term, the burden of proof is on you.

Until you do, it's perfectly reasonable and intellectually honest to reject that claim.


Sure, it's impossible to corroborate at this point because there's no direct evidence of it. It's not like he wrote it in a mailing list that we still have access to. It was (according to what I've read about it) a verbal statement made in response to a question asked of him by someone else. I don't know who the other person is, though perhaps that would be a place to look.

References I've seen have, of course, essentially all pointed back to Kay's claims. I imagine this is insufficient in your eyes, so I won't bother finding them for you.

Arguing "it's reasonable and intellectually honest to reject [the claim that Kay coined the term]" is silly. It's not reasonable, because there's no real reason to suspect the claim to be false in the first place. For 50+ years it has been accepted knowledge that Kay coined the term. Nobody — including people with direct experience on the same teams or with otherwise opposing claims — has stepped forward to dispute this fact in all that time. This would be just like saying "Well I don't think da Vinci really made the Mona Lisa. I mean, all we have is his word for it. Sure, the painting didn't exist before him, and its existence appears to have started with him, and people at the time attribute its existence to him, but for all we know maybe somebody else did it and gave it to him to use as his own!" Sure, it's possible... but it's a silly claim to make (and hence not reasonable).

Your position is not "intellectually honest" because it sincerely looks like you're just trying to be antagonistic. What's the point in arguing that Kay didn't coin the term? Do you have some unsung hero in mind you'd like to promote as the coiner? Or do you just like arguing against commonly-held beliefs for the sake of it? I don't see what you're trying to accomplish.

Two more thoughts:

1. The only way to prove Kay didn't originally coin the term would be to find hard evidence of it used in a similar fashion (i.e., with regard to programming) from prior to 1966 (the time Kay claims he invented the term).

2. If you had such evidence, you would need to prove that Kay had seen it prior to his alleged coinage. In the absence of such proof, the existence of the term prior to Kay's use would be irrelevant. Why? Because the community as a whole has gone off of Kay's claim for the whole time. If somebody else conceived of "object-oriented programming", we didn't get it from them — we got it from Kay.


Alan Kay responded above so...


Link at [0].

I'm a little skeptical. That user certainly writes in a similar style to how I've seen Alan Kay write online, but I wouldn't be opposed to seeing some more proof. A one-day-old HN account claiming to belong to one of the most important people in CS from the past 50 years seems a little suspicious haha.

[0] https://news.ycombinator.com/item?id=19717640


An interesting and unfortunately true commentary on the lack of civilized behavior using technology that actually required a fair amount of effort -- and civilized behavior -- to invent in the first place.


Yeah, it's definitely disappointing that we have to worry about things like that, but that's the nature of the beast I guess. I hope you don't take any offense at my skepticism! For what it's worth, I'm happy assuming you're the real deal because being a cynic all the time is no fun and I have no specific reason to believe otherwise at the moment; I just also wouldn't be surprised to discover it's fake haha.

Also, I walk by your face a few times a week whenever I head into my lab. MEB has redecorated a few times over the years, but they always have a section of pictures of notable alumni and (of course) you're up there. Thanks for giving us a good name in the field and for all you've done!


Merrill Engineering Building! I'm glad it is still around. Those long hallways were used as a "display" to unroll the many pages of Simula machine code listings down one corridor so that three grad students -- including me -- could crawl over it and coordinate to try to understand just what Simula might actually be (the documentation in Norwegian that had been transliterated into English was not understandable).


I'd be really interested to hear what you think they missed, because I find your claim to be surprising and a bit preposterous.


Armstrong wrote the very famous "Why OO sucks" and then a decade or two later, changed his mind when he saw how successful OO was, and then tried to retrofit Erlang into an OO language. Not by changing Erlang, but by twisting the definition of OOP so that Erlang would fit it.


That isn't what happened at all (see the rebuttal by revvx). Joe was a great guy and also a great systems thinker. And he was the last person to worry about "bandwagons" (quite the opposite!)


I don't think that's what happened.

Joe Armstrong was criticizing C++-style OOP when he wrote his critique.

After he learned more about Alan Kay's view on OOP, he decided that Erlang is closer to Alan Kay's OOP and he approves that specific flavor of OOP.

He didn't change his stance based on popularity. He changed his stance because in the 80s/90s the term "OOP" was synonymous with C++-style OOP, but that changed in the 2000s thanks to 1) criticism of C++-style OOP becoming commonplace in our industry (thanks to people like Joe Armstrong) and 2) an increase in the popularity of languages like Ruby and Objective-C (which are closer to Smalltalk), and even of much-maligned concepts such as DCOM, SOA and CORBA.


He doesn't even mention C++ in his essay [1], but regardless, C++-style OOP is pretty much the mainstream OOP, which we still use today in Java, Kotlin, C#, etc...

And... no, the change in mindset about OOP never happened. Kay and Armstrong's view of OOP never caught on. Today, OOP is still not seen as message passing and is mostly seen as polymorphism, parametric typing, classes/traits/interfaces, and encapsulation. The complete opposite of what Erlang is.

[1] http://harmful.cat-v.org/software/OO_programming/why_oo_suck...


I'm the one mentioning C++. To anyone familiar with both styles, Joe Armstrong is clearly not talking about Smalltalk-style OOP in his essay, he's talking about C++/Java/etc style. And later on he only praised Smalltalk-style OOP.

And sorry, by a "change in mindset in our industry regarding OOP" I mean that it became commonplace to criticize C++-style OOP. Not that everyone stopped programming in that style. Maybe there's a better way to phrase it?


Again, please do your homework.


"seen as" is the key here. "The masses" ultimately usually get to define terms, for good or bad. The gestalt or "feel" of what OOP "is" is often shaped by common languages and their common usage, again for good or bad.

It may be better to define specific flavors or aspects of OOP or OOP-ish things and discuss them in isolation with specific scenarios. That way the messy issue of canonical definition(s) doesn't come into play as often.

It would then be more of "hey, here's a cool feature of Language X or System X! Look what it can do...". Whether it's canonical or not is then moot.


This is way off. Please try to do more homework.



