It really should be noted that years later Joe changed his mind about OO and came to the realization that perhaps Erlang is the only object-oriented language :) From a 2010 interview:
..."I wrote a an article, a blog thing, years ago - Why object oriented programming is silly. I mainly wanted to provoke people with it. They had a quite interesting response to that and I managed to annoy a lot of people, which was part of the intention actually. I started wondering about what object oriented programming was and I thought Erlang wasn't object oriented, it was a functional programming language.
Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.
Alan Kay himself wrote this famous thing and said "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages". He wrote that and he said that the initial reaction to object oriented programming was to overemphasize the classes and methods and under emphasize the messages and if we talk much more about messages then it would be a lot nicer. The original Smalltalk was always talking about objects and you sent messages to them and they responded by sending messages back."
That speaks to one of the things that bothers me about OOP's intellectual traditions: there are two different ideas of what "object" can mean, and most object-oriented languages and practices deeply conflate the two.
On the one hand, "object" can mean a unification of data structures with the procedures that act on them. In this view, the ideal is for everything to be an "object", and for all the procedures to actually be methods of some class. This is the place from which we get both the motivation for Java's ban on functions that don't belong to classes, and the criticism of Java as not being truly OO because not every type is an object. In this view, Erlang is not OO, since, at the root, functions are separate from datatypes.
On the other hand, "object" can describe a certain approach to modularity, where the modules are relatively isolated entities that are supposed to behave like black boxes that can only communicate by passing some sort of message back and forth. This ends up being the motivation for Java's practice of making all fields private, and only communicating with them through method calls. In this view, Erlang is extremely OO, for all the reasons described in parent.
I haven't done an exhaustive analysis or anything, but I'm beginning to suspect that most of the woes that critics commonly describe about OO come from the conflation of these two distinct ideas.
I haven't, but, at least insofar as my thinking has developed (and insofar as Erlang supports it), the question of inheritance is more orthogonal than essential to the specific point I was trying to make. And failed to state clearly, so here it is: This essay is right, and Armstrong is also right when he said "Erlang might be the only object-oriented language". The tension there isn't, at the root, because Armstrong was confused about what OOP is really about; it's because OOP itself was (and is) confused about what OOP is really about.
That said, I would also argue that, like "object", "inheritance" is a word that can describe many distinct concepts, and here, too, OOP's intellectual traditions create a muddle by conflating them.
Inheritance is a limited convention to do mixins. Including it in the abstract idea of object oriented programming is harmful, other than in reference to the ugly history of "Classical OOP" or "Non-Kay OOP" as you like.
“I mainly wanted to provoke people...” I hate this. I see it way too often. It’s either a cop-out to avoid having to own up to your arguments or it’s just poisonous rhetoric in the first place that contributes to partisan opinions, especially when the speaker has an air of authority that causes people to accept what they say at face value. It is directly antithetical to critical thinking.
> It is directly antithetical to critical thinking.
I don't think it is.
Yes, it can get some people to just lash out in response.
But it also often forces people to think critically about how to convincingly justify their own standpoint to counter the provocation. This can be particularly useful when a viewpoint has "won" to the extent that people just blindly adopt it without understanding why.
It does have its problems in that it is hard to predict, and there's a risk that measured reactions get drowned out by shouting, so I'm not going to claim it's a great approach, but it has its moments.
True, I can see how in this case, at that time, it could be effective. But ironically, there seems to be a similar dogma surrounding FP these days - speaking even as a fan of the paradigm, with a perspective tempered by experience. I can’t help but think that polarized viewpoints like this contribute to replacing the subject of the idealization rather than the underlying problem of idealizing itself, if only indirectly due to the combination of the arguments themselves and the sense of authority behind them, rather than the merit of the arguments alone.
> Isn't a method call a message, and the return value a message back?
It is!
In my view, the point that Alan Kay and Joe Armstrong are trying to make is that languages like C++/Java/C# etc have very limited message passing abilities.
Alan Kay uses the term "late binding". In Kay's opinion, "extreme late binding" is one of the most important aspects of his OOP [1], even more important than polymorphism. Extreme late binding basically means letting the object decide what it's gonna do with a message.
This is what languages like Objective-C and Ruby do: deciding what to do with a dispatched method always happens at runtime. You can send a message that does not exist and have the class answer it (method_missing in Ruby); you can send a message to a nil object and it will respond with nil (Objective-C, IIRC); you can delegate everything but some messages to a third object; you can even send a message to a class running on another computer (CORBA, DCOM).
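For instance, a minimal Ruby sketch of the "delegate everything but some messages to a third object" idea (class and method names are invented purely for illustration):

class LoggingProxy
  def initialize(target)
    @target = target
  end

  # Handle a few messages locally...
  def inspect
    "#<LoggingProxy wrapping #{@target.class}>"
  end

  # ...and decide at runtime to forward everything else to the wrapped object.
  def method_missing(name, *args, &block)
    puts "forwarding #{name}"
    @target.send(name, *args, &block)
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private) || super
  end
end

LoggingProxy.new([1, 2, 3]).sum   # prints "forwarding sum", returns 6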
In C++, for example, the only kind of late binding that you have is abstract classes and vtables.
-
> Or is it that "true OO" must be asynchronous?
It doesn't have to be asynchronous, but in Alan Kay's world, the asynchronous part of messaging should be handled by that "dispatcher", rather than putting extra code in the sender or the receiver.
I don't remember Alan Kay elaborating on it, but he discusses a bit about this "interstitial" part of OOP systems in [2]
C++'s vtable is also late binding, since you don't know which implementation you're calling until runtime. And there's no such thing as "extremely late binding".
> In C++, for example, the only kind of late binding that you have is abstract classes and vtables.
That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allow for the "method_missing" protocol, and such features are very unpopular.
To be honest I don't see much of a difference. I've worked with a lot of dynamic OOP languages, including with Erlang-style actors and I've never seen the enlightenment of dynamic OOP message passing.
And I actually like OOP, but I don't really see the point of all this hyperbole about Smalltalk.
> That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allow for the "method_missing" protocol, and such features are very unpopular.
That is the difference. If every class in C++ had only one method - send_message and each object is an independent thread, you will get how Erlang works. That is how you would do the actor model in C++.
Inheritance and polymorphism are emphasised in Java, C++ and C#, whereas functional programmers emphasise function objects / lambdas / the Command pattern, where you just have one method - calling the function. In fact, having just one method, you no longer need polymorphism / interfaces.
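A rough sketch of that "one send_message method per object, each object an independent thread" idea, written here in Ruby rather than C++ (all names invented for illustration):

class CounterActor
  def initialize
    @mailbox = Queue.new      # the "medium" the messages travel through
    @count = 0
    @thread = Thread.new do
      loop do
        msg, args = @mailbox.pop
        handle(msg, *args)
      end
    end
  end

  # The only public entry point: drop a message in the mailbox and return.
  def send_message(msg, *args)
    @mailbox << [msg, args]
  end

  private

  # Only the actor's own thread ever touches @count.
  def handle(msg, *args)
    case msg
    when :increment then @count += 1
    when :report    then puts "count is #{@count}"
    else puts "don't understand #{msg.inspect}"
    end
  end
end

a = CounterActor.new
a.send_message(:increment)
a.send_message(:report)
sleep 0.1   # give the actor's thread a moment before the script exits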
> C++'s vtable is also late binding, since you don't know which implementation you're calling until runtime. And there's no such thing as "extremely late binding".
C++'s vtables are determined at compile time. The specific implementation executed at a given moment may not be possible to deduce statically, but the set of possible methods is statically determined for every call site: It consists of the set of overridden implementations of the method with that name in the class hierarchy from the named type and downwards.
No such restriction exists in Ruby or Smalltalk or most other truly dynamic languages. E.g. for many Ruby ORMs, the methods that will exist on a given object representing a table will not be known until you have connected to the database and read the database schema from it, and at the same time I can construct the message I send to the object dynamically at runtime.
Furthermore the set of messages a given object will handle, or which code will handle it can change from one invocation to the next. E.g. memoization of computation in Ruby could look sort-of like this:
class Memo
  def method_missing(op, *args)
    result = expensive_operation(op, *args)   # placeholder: execute expensive operation here
    define_singleton_method(op) { result }    # later calls skip method_missing entirely
    result
  end
end
After the first calculation of a given operation, instead of hitting method_missing, it just finds a newly created method returning the result.
"Extreme late binding" is used exactly because people think things like vtables represent late-binding, but the ability to dynamically construct and modify classes and methods at runtime represents substantially later binding.
E.g. there's no reason why all the code needs to be loaded before it is needed, and methods can be constructed at that time. And incidentally this is not about vtables or not vtables - they are an implementation detail. Prof. Michael Franz's paper on Protocol Extension [1] provided a very simple mechanism for Oberon that translates nicely to vtables by dynamically augmenting them as code is loaded at runtime. For my (very much incomplete) Ruby compiler, I use almost the same approach to create vtables for Ruby classes that are dynamically updated by propagating the changes downwards until it reaches a point where the vtable slot is occupied by a different pointer than the one I'm replacing (indicating the original method has been overridden). Extending the vtables at runtime (as opposed to adding extra pointers) would add a bit of hassle, but is also not hard.
The point being that this is about language semantics in terms of whether or not the language allows changing the binding at runtime, not about the specific method used to implement the method lookup semantics of each language - you can implement Ruby semantics with vtables, and C++ semantics by a dictionary lookup. That's not the part that makes the difference (well, it affects performance).
> That's not true, you can always have a "send_message(string id)". Few people do it because you lose static type safety. And some languages, like C# and Scala, have dynamic types that allow for the "method_missing" protocol, and such features are very unpopular.
If you're working in a language with static typing you've already bought into a specific model; it's totally unsurprising that people who have rejected dynamic typing in their language choice will reject features of their statically typed language that do dynamic typing. I don't think that says anything particularly valuable about how useful it is. Only that it is generally a poor fit for those types of languages.
The only good thing about OO as an architecture is that there is nearly no education required to introduce it to the greenest novice in the field. It's basically the default thinking approach, rebranded.
It comes with all the benefits of a mental model - quick orientation - and all the negatives of a mental model.
(Badly adapted to machine execution; after a certain complexity level is reached, god-like actor objects - basically programmers in software disguise - start to appear.)
Disagree. The original design patterns book was really about ways OOP should be used that don't fit people's everyday conception of objects. (Of course that causes different problems for the novice keen to use the patterns, but that's another story.)
My first exposure to the actor model was with Akka on Scala. After working with it for a little while, I thought "this is what OOP should be, perhaps I just hate broken implementations of OOP (i.e., Java, C++), rather than OOP itself." Heck, I like Ada95's implementation of OOP better than Java's.
I keep meaning to give Erlang a try, but just haven't had a reason yet. I do a lot of Clojure, these days :)
What does late binding buy you? That sounds like an argument for non-strictly typed languages. Isn't it the strict typing that prevents late binding? The compiler wants to know at compile time the types of all the messages and whether or not an object can handle a given message, hence all messages must be typed and every object must declare which messages it accepts.
- Abstract classes/methods, and interfaces. This is implemented using vtables in C++.
- Ability to send messages asynchronously, or to other computers, without exposing the details of such things. You just call a method in another class and let your dispatcher handle it. There was a whole industry built around this concept in the 90s: CORBA, DCOM, SOAP. And Erlang, of course, in a different way.
- Ability to change the class/object during runtime. Like you can with Javascript and Lua, calling `object.method = `. Javascript was inspired by Self (a dialect of Smalltalk), so there's that lineage. Other languages like Python and Ruby allow it too.
- Ability to use the message passing mechanism to capture messages and answer them. Similar to Ruby's "method_missing" and ES6 Proxies in Javascript. This is super useful for DSLs and a great abstraction to work with. Check this out: http://npmjs.com/package/domz
Remember that you can have some of those things without dynamic typing (Objective-C).
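As a small Ruby illustration of the third point above (changing classes and objects at runtime; class and method names invented):

class Greeter
  def hello
    "hi"
  end
end

g = Greeter.new
g.hello   # => "hi"

# Later, while the program is running, reopen the class and change the method...
class Greeter
  def hello
    "hello again"
  end
end

# ...or add a brand new one, to the class or to a single object.
Greeter.send(:define_method, :goodbye) { "bye" }
g.define_singleton_method(:wave) { "o/" }

g.hello     # => "hello again"  (existing objects see the new binding)
g.goodbye   # => "bye"
g.wave      # => "o/"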
a) The crash is from the default unhandled exception handler, which will send a signal to abort. So if you just want to crash, you can either handle that particular exception or install a different unhandled exception handler
b) An object gets sent the -forwardInvocation: message when objc_msgSend() encounters a message the object does not understand. The exception above gets raised by the default implementation of -forwardInvocation: in NSObject.
o := NSObject new.
o class
-> NSObject
n := NSInvocation invocationWithTarget:o andSelector: #class
n resultOfInvoking class
-> NSObject
o forwardInvocation:n
2019-04-22 07:49:12.339 stsh[5994:785157] exception sending message: -[NSObject class]: unrecognized selector sent to instance 0x7ff853d023c0 offset: {
(This shows that -forwardInvocation: in NSObject will raise that exception, even if the NSInvocation is for a message the object understands)
If you override -forwardInvocation:, you can handle the message yourself. In fact, that is the last-ditch effort by the runtime. You will first be given the chance to provide another object to send the message to ( - (id)forwardingTargetForSelector:(SEL)aSelector; ) or to resolve the message in some other way, for example by installing the method ( + (BOOL)resolveInstanceMethod:(SEL)sel; )[0].
Cocoa's undo system is implemented this way[1], as is Higher Order Messaging[2][3]
NextStep adopted it but did not invent it. Once Apple acquired NextStep and released OS X they were the only major company supporting it and had defacto control over the language.
The complaint I have is with NSObject which can be blamed on Next Step. Although another comment pointed out I just didn’t know about a workaround.
There were two different major mutually-incompatible “flavors” of Objective-C (my first book on Objective-C covered both, and my first Objective-C programming was done on a NeXTcube), one of which originated at NeXT (NextStep was the OS that was NeXT's last major surviving product after they dropped hardware, not the company).
Extreme late binding: with "The Pure Function Pipeline Data Flow", you attach data or metadata to the data flow, and the pipeline function parses it at run time, which is simpler, more reliable, and clearer.
C++, Java etc. all lack proper union types with appropriate pattern matching. So a lot of useful message passing patterns cannot be implemented without too much boilerplate.
I think that, in the spirit of OO, an object must have agency over how a message is interpreted in order for it to be considered a message. If the caller has already determined for the object that it is going to call a method, then the object has lost that agency. In a 'true OO' language an object may choose to invoke a method that corresponds to the details within the message, but that is not for the caller to decide.
Consider the following Ruby code:
class MyClass
  def foo
    'bar'
  end
end

# A second class that answers the same message without defining the method.
class MyOtherClass
  def method_missing(name, *args, &block)
    if name == :foo
      return 'bar'
    end
    super
  end
end
To the outside observer, the two classes are effectively equivalent. Since, conceptually, a caller only sends a message `foo`, rather than calling a method named `foo`, the two classes are able to make choices about how to handle the message. In the first case that is as simple as invoking the method of the same name, but in the second case it decides to perform a comparison on the message instead. With reception of a message, it is free to make that choice. To the caller, it does not matter.
If the caller dug into the MyClass object, found the `foo` function pointer, and jumped into that function then it would sidestep the message passing step, which is exactly how some languages are implemented. In the spirit of OO, I am not sure we should consider such languages to be message passing, even though they do allow methods to be called.
vtables are an implementation detail. To compile Ruby with vtables, consider this:
class A
  def foo; end
end

class B < A
  def foo; end
  def bar; end
end
Now you make a vtable for class A that looks conceptually something like this:
slot for foo = address_of(A#foo)
slot for bar = method_missing_thunk(:bar)
And a vtable for class B that looks like this:
slot for foo = address_of(B#foo)
slot for bar = address_of(B#bar)
The point being that you can see every name used in a method call statically during parsing, and can add entries like `method_missing_thunk(:bar)` to the vtable, that just pushes the corresponding symbol onto the stack and calls a method_missing handler that tries to send method_missing to the objects.
You still need to handle #send, but you can do that by keeping a mapping of symbols => vtable offset. Any symbol that is not found should trigger method_missing; that handles any dynamically constructed names, and also allows for dynamically constructed methods with names that have not been seen as normal method calls.
When I started experimenting with my Ruby compiler, I worried that this would waste too much space, since Ruby's class hierarchy is globally rooted and so, without complicated extra analysis to chop it apart, every vtable ends up containing slots for every method name seen in the entire program, but in practice it seems like you need to get to systems with really huge numbers of classes before it becomes a real problem, as so many method names get reused. Even then you can just cap the number of names you put in the vtables, and fall back to the more expensive dispatch mechanism for methods you think will be called less frequently.
(redefining methods works by propagating the new pointer downwards until you find one that is overridden - you can tell it's overridden because it's different than the pointer at the site where you started propagating the redefined method downwards; so this trades off cost of method calls with potentially more expensive method re-definition)
What is the advantage of doing that instead of using an IObservable that can filter on the event name in C# or, even better in F#, having an exhaustive pattern match that automatically casts the argument to the expected type and notifies you at compile time if you forgot to handle some cases?
In Kay's OO the only way to interact with an object was through message passing. It was important that the internal state of an object was kept private at all times.
Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world.
But we see getters/setters used constantly. People don't use OO in the way Kay intended. Yes, methods are the implementation of the whole "message passing" thing Kay was talking about, but we see them used in ways he did not intend.
Maybe I am a complete philistine, but is that really a bad thing, or just something that goes against their categorization? I get that there are some circumstances where setters would break assumptions, but classes are meant to be worked with, period.
Objects are meant to have a life cycle in which the state should only be changed by the object itself. Setters violate this idea by allowing the sender of the message direct control over the state of the object.
A simplistic example: account.deposit(100) may directly add 100 to the account's balance and a subsequent call to account.balance() may answer 100 more than when account.deposit(100) was called. But those details are up to that instance of the account not the sender of those messages. The sender should not be able to mutate account.balance directly, whether it be via direct access to the field or through the proxy of a setter.
Well... a setter is the object changing its own state. That's why the setter has to be a member function.
I would say instead that an object shouldn't have setters or getters for any members unless really necessary. And by "necessary", I don't mean "it makes it easier to write code that treats the object as a struct". I mean "setting this field really is an action that this object has to expose to the external world in order to function properly". And not even "necessary" because I coded myself into a corner and that's the easiest way out I see. It needs to be necessary at the design level, not at the code level.
It depends; most of the time it is better to have separate functions that transform your data rather than methods and state conflated together. But obviously it depends on the context.
Yeah, there are no hard and fast rules, but a lot of the time transformations can live in the object as well. If I need a function to transform Foo to Bar, I could just as easily send a toBar() message to an instance of Foo.
I think C# really got the best of both worlds with extension methods, where you can actually define functions that act on an object but are separated from the actual class definition.
I still think that pure functions and especially higher-kinded types are probably better, although I have no direct experience with Haskell type classes, Scala implicits and OCaml modules.
It's not exactly a bad thing, it's just that you're using a hammer (class) when what you actually need is a screwdriver (struct/record).
Abusing getters/setters is breaking encapsulation (I said abusing, light use is ok). If you're just going to expose all the innards of the class, why start with a Class?
The whole point of object orientation is to put data and behavior together. That's probably the only thing that both the C++/Java and the Smalltalk camps agree on.
Separating data and the behavior into two different classes breaks that. You're effectively making two classes, each with "half of a responsibility". I can argue that this breaks SRP and the Demeter principle in one go.
Another thing: abuse of getters/setters is often a symptom of procedural code disguised as OOP code. If you're not going to use what is probably the single biggest advantage of OOP, why use it at all?
-
Here's an answer that elaborates on this that I like:
> The whole point of object orientation is to put data and behavior together
May I politely disagree, based on my long-ago experience with Dylan, which has multimethods (<https://en.wikipedia.org/wiki/Multimethods>). This allowed the actions on the data (the methods) to be defined separately from the data. I strongly feel that it was OO done right, and it felt right.
You can read about it on the wiki link but it likely won't click until you play with it.
I'd like to give an example but it's too long ago and I don't have any to hand, sorry.
It’s a different semantic in my opinion.
Even in mutable objects it’s better to have setters that act only on the field that they are supposed to mutate and do absolutely nothing else.
If you need a notification you can raise an event and then the interested parties will react accordingly.
By directly mutating an unrelated field in the setter, or even worse, calling an unrelated method that wreaks complete havoc on the current object state, you are opening yourself up to an incredible amount of pain.
I disagree, slightly. A setter (or any method, for that matter) has to keep the object in a consistent state. If it can't set that one field without having to change others, then it has to change others.
Now, if you want to argue that an object probably shouldn't be written in the way that such things are necessary, you're probably right. And if you want to argue that it should "just set the one field in spirit" (that is, that it should do what it has to to set the field, but not do unrelated things), I would definitely agree with you. But it's not quite as simple as "only ever just set the one field".
> Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world.
No, they don't, because “more or less” is not actually directly. Particularly, naive getters and setters can be (and often are) replaced with more complex behavior with no impact to consuming code because they are simply message handlers, and they abstract away the underlying state.
> No, they don't, because “more or less” is not actually directly.
I disagree.
Consider a `Counter` class, intended to be used for counting something. The class has one field: `Counter.count`, which is an integer.
A setter/getter for this field would be like `Counter.setCount(i: Int)` and `Counter.getCount() -> Int`. There is no effective difference between using these methods and having direct access to the internal state of the object.
A more "true OOP" solution would be to use methods with semantic meaning, for example: `Counter.increment()`, `Counter.decrement()`, and `Counter.getCount() -> Int`. (Yes, the getter is here because this is a simple example.) These kinds of methods are not directly exposing the internal state of the object to be freely manipulated by the outside world.
If your getter/setter does something other than just get/set, then it's not really a getter/setter anymore — it's a normal method that happens to manipulate the state, which is fine. But using getters/setters (in the naive, one-line sense) is commonplace with certain people, and I feel that their use undermines the principles Kay was getting at.
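A tiny Ruby sketch of that contrast (illustrative only; attr_accessor stands in for the naive getter/setter pair):

# Effectively a struct: callers drive the state from outside.
class CounterStruct
  attr_accessor :count   # naive getter/setter for the internal field

  def initialize
    @count = 0
  end
end

c = CounterStruct.new
c.count = c.count + 1    # the caller manipulates internal state directly

# The object owns its state: callers only express intent.
class Counter
  def initialize
    @count = 0
  end

  def increment
    @count += 1
  end

  def decrement
    @count -= 1
  end

  def count
    @count
  end
end

c2 = Counter.new
c2.increment
c2.count   # => 1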
I have seen side effects for completely unrelated fields in setters. Heck, I’ve even witnessed side effects in bloody getters.
This is the reason why now I’m a huge fan of immutable objects.
Actually nowadays I became a fan of functional languages with first class immutability support.
> but they undermine the design goal because they more or less directly expose internal state to the public world.
This has always been my problem with getters and setters. It's a way of either pretending you are not messing with the object's internal state, or putting bandaids on the fact that you are. For objects with dynamic state this is really bad. The result is racy or brittle.
> Getters/setters are technically message-passing methods, but they undermine the design goal because they more or less directly expose internal state to the public world
If they do, that's your fault for letting them. I guess you mean when people chain stuff thus
company.programmers.WebDevs.employ('fred')
where .programmers and .WebDevs are exposed internals of the company and the programmers department respectively? (I've seen lots of this, and in much longer chains too. We all have.) In which case please see the Law of Demeter <https://en.wikipedia.org/wiki/Law_of_Demeter>, which says don't do this. The wiki article is good.
I doubt any language can prevent this kind of 'exposing guts' malpractice, it's down to the humans.
> I doubt any language can prevent this kind of 'exposing guts' malpractice
Actually, true OOP languages do prevent this. Internal state is completely private and cannot be exposed externally. The only way to interact with an object's state is through its methods — which means the object itself is responsible for knowing how to manipulate its internal state.
Languages like Java are not "true" OOP in this sense, because they provide the programmer with mechanisms to allow external access to internal state.
Internal state should be kept internal. You shouldn't have a class `Foo` with a private internal `.bar` field and then provide public `Foo.getBar()` and `Foo.setBar()` methods, because you may as well just have made the `.bar` field public in that case.
Also, FWIW, I did not downvote you. I dunno why you were downvoted. Seems you had a legitimate point here, even if I disagree with it.
I'm not sure that's a proven model. It's a proposed model, for sure. Since you can't protect memory from runtime access, you can't really protect state, so it's a matter of convention which Python cleverly baked in (_privatevar access).
Ah sorry, I was speaking in the context of Kay's OOP! In retrospect my phrasing made it seem like I was stating an opinion as fact, but what I meant was just that Kay's OOP mandated that internal state could not be exposed and was very opinionated on the matter.
When I think of message passing, I think of message queues. There should be an arbiter, a medium of message passing so you can control how that message is passed and how it will arrive.
The Java and C++ way of message passing both stripped that medium down to a simple vtable to look up what methods the object has. Erlang and Go have the right idea of passing messages through a medium that can serialize and multiprocess them. C# tries to do this with further abstractions like parallelized LINQ queries, and C#, Python and Node.js use async/await to delegate the messages to event queues. Python can also send messages to multiple processes. All this shows us that message passing requires a medium that primitive method calls lack.
>both stripped that medium down to a simple vtable to look up what methods the object has.
If they used vtables it'd just be slow. Not needing the trampoline, and the ability to inline harder, is what makes it fast. The usual case is class hierarchy analysis, static calls (no more than a single implementer, proven by the compiler), guarded calls (check + inline; Java deoptimizes if need be), bi-morphic call site inlining, inline caches, and if that fails - the vtable thing.
Message passing in the classical way is just awfully slow for a bottom-of-the-stack building block. It doesn't map to the hardware.
It does make sense for concurrency with bounded, lock-free queues (the actor model). But at some point, someone has to do the heavy lifting.
I suppose C++-style method calls are a limited form of OO, without asynchronicity, running in independent threads when required, no shared state, ability to upgrade or restart a failed component...
No, it does not have to be async.
My impressions from using Squeak regarding this matter:
1. You can send any message to any object. In case the object does not have a suitable handler, you will get an exception: <object> does not understand <message>. The whole thing is very dynamic.
> It really should be noted that years later Joe changed his mind about OO and came to the realization that perhaps Erlang is the only object-oriented language :)
But not in the way he's describing OO in his blog post. He's talking about a language with functions bound to objects and where objects have some internal state. The OO he's describing does not have isolation between objects because you can share aliases freely; references abound.
Nobody can agree on what OOP really is. I've been in and seen many long debates on the definition of OOP. It's kind of like a Rorschach test: people project their preferences and biases into the definition.
Until some central body is officially appointed definition duty, the definition debate will rage on.
Is this different from ANY other concept in technology? Personal Computing, Big Data, Cloud Computing, Deep Learning, Artificial Intelligence? We never have real definitions for any of these, and if you attempt to make one it will be obsolete before you finish your blog post.
The only real problem I see is that too many technologists insist that there is 'one definition to rule them all', and it's usually the one they most agree with. As long as we all understand that these terms are fluid and can explain the pros and cons of our particular version, we will be fine.
If pretty much every single implementation of OO languages misunderstood Kay, that just means Kay either didn't explain himself well, or OO as he intended it is so easy to misunderstand that it's almost useless as a programming paradigm. At this point, it really doesn't matter anymore. OO is what OO languages like C++ and Java have made it. The original author in no way has a monopoly or even a privileged viewpoint in the matter.
And frankly, I agree with the original article. OO is very poor and leads to a lot of misunderstandings because it has a lot of problems in its core design. It "sucks." It never made much sense to me, and clearly it never made much sense to even the people designing languages such as C++ or Java, because it's taken decades to come up with somewhat useful self-imposed limitations and rules on how to use OO without ending up with an ugly mess.
It's completely unintuitive and out of the box misleads just about every beginner who tries to use it. A programming paradigm should make it obvious how it's supposed to be used, but OO does the opposite. It obfuscates how it should be used in favor of features like inheritance that lead users down a path of misery and pain due to complexity and dead ends that require rewriting code. In most cases, it's mostly a way to namespace code in an extremely complicated and unintuitive manner. And we haven't even touched the surface as to its negative influences on data structures.
Both Alan Kay and Joe Armstrong struck me as having had the same attitude of trying to capitalize on the topic of object oriented programming, failing to recognize its importance, and then later trying to appropriate it by redefining it.
Not the best moment of these two otherwise bright minds.
He coined the term “object,” but what he meant by a computational object was different than what it came to mean: a data structure with associated operations upon it. Kay meant a parallel thread of execution which was generally sitting in a waiting state—one could make a very strong analogy between Smalltalk's vision of “objects” and what we call today “microservices,” albeit all living within the same programming language as an ecosystem rather than all being independent languages implementing some API.
But whether this is an “object-oriented” vision depends on whether you think that an object is intrinsically a data structure or an independent computer with its own memory speaking a common API. The most visible difference is that in the latter case one object mutating any other object's properties is evil—it is one computer secretly modifying another’s memory—whereas in the other case it is shrug-worthy. But arguably the bigger issue is philosophical.
That is hard to explain and so it might be best to have a specific example. So Smalltalk invents MVC and then you see endless reinventions that call themselves MVC in other languages. But most of these other adaptations of MVC have very object-oriented models: they describe some sort of data structure in some sort of data modeling language. But that is not the “object” understanding of a model in Smalltalk. When Smalltalk says “model” it means a computer which is maintaining two things: a current value of some data, and a list of subscribers to that value. Its API accepts requests to create/remove subscriptions, to modify the value, and to read the value. The modifications all send notifications to anyone who is subscribed to the value. There is not necessarily anything wrong with data-modeling the data, but it is not the central point of the model, which is the list of subscribers.
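A rough sketch of that kind of "model", written here in Ruby for concreteness (names invented; Smalltalk would of course phrase it differently), holding a value plus a subscriber list and notifying on every change:

class ValueModel
  def initialize(value)
    @value = value
    @subscribers = []
  end

  # Subscription management is part of the model's API.
  def subscribe(&block)
    @subscribers << block
    block                      # returned so the caller can unsubscribe later
  end

  def unsubscribe(subscription)
    @subscribers.delete(subscription)
  end

  def value
    @value
  end

  # Every modification notifies whoever is subscribed.
  def value=(new_value)
    @value = new_value
    @subscribers.each { |s| s.call(new_value) }
  end
end

m = ValueModel.new(0)
m.subscribe { |v| puts "view sees #{v}" }
m.value = 42   # prints "view sees 42"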
A more extreme example: no OOP system that I know of would do something as barbarous as to implement a function which would do the following:
> Search through memory for EVERY reference to that object, and replace it with a reference to this object.
That just sounds like the worst idea ever in OOP-land; my understanding of objects is as data structures which are probably holding some sort of meaningful data; how dare you steal my data structure and replace it with another. But Smalltalk has this; it is called Object.become. If you are thinking of objects as these microservicey things then yeah, of course I want to find out how some microservice is misbehaving and then build a microservice that doesn't misbehave that way and then eventually swap my new microservice in for the running one. (That also hints at the necessary architecture to do this without literally scanning memory: like a DNS lookup giving you the actual address, every reference to an object must be a double-star pointer under the hood.) And as a direct consequence, when you are running Smalltalk you can modify almost every single bit of functionality in any of the standard libraries to be whatever you need it to be, live, while the program is running. Indeed the attitude in Smalltalk is that you will not write it in some text editor, but in the living program itself: the program you are designing is running as you are writing it and you use this ability to swap out components to massage it into the program that you need it to become.
I didn't coin the term "object" -- and I shouldn't have used it in 1966 when I did coin the term "object-oriented programming" flippantly in response to the question "what are you working on?".
This is partly because the term at the time meant a patch of storage with multiple data fields -- like a punched card image in storage or a Sketchpad data-structure.
But my idea was about "things" that were like time-sharing processes, but for all entities. This was a simple idea that was catalyzed by seeing Sketchpad and Simula I in the same week in grad school.
The work we did at Parc after doing lots of software engineering to get everything to be "an object", was early, quite successful, and we called it "object-oriented programming".
I think this led to people in the 1980s wanting to be part of this in some way, and the term was applied in ways that weren't in my idea of "everything from software computers on a network intercommunicating by messages".
I don't think the term can be rescued at this point -- and I've quit using it to describe how we went about doing things.
It's worth trying to understand the difference between the idea, our pragmatic experiments on small machines at Xerox Parc, and what is called "OOP" today.
The simplest way to understand what we were driving at "way back then" was that we were trying to move from "programming" as it was thought of in the 60s -- where programs manipulated data structures -- to "growing systems" -- like Smalltalk and the Internet -- where the system would "stay alive" and help to move itself forward in time. (And so forth.)
The simplest way to think about this is that one way to characterize systems is that which is "made from intercommunicating dynamic modules". In order to make this work, one has to learn how to design and maintain systems ...
I was really not expecting you to join this conversation and I am very thankful to have crossed paths with you, even so briefly. Sorry for getting you wrong about the “objects” vs. “OOP” thing.
I have thought you could maybe call it “node-oriented” or “thread-oriented” but after reading this comment I think “ecosystem-oriented” might be more faithful a term?
I think the inspiration from Simula I is something a lot of folks either don't know about, or maybe they know about it but don't recognize its significance. Objects with encapsulated state that respond to well-defined messages are a useful level of abstraction for writing simulations of the sort Simula was built for. They're just not automatically a particularly wieldy abstraction for systems that aren't specifically about simulation. Some (most?) of that is about the skill of the programmer, imo, not some inherent flaw in the abstraction itself.
P.S.: Thank you for all your contributions to our profession, and for your measured response to these kinds of discussions.
It's out of the context of this thread, but we were quite sure that "simulation-style" systems design would be a much more powerful and comprehensive way to create most things on a computer, and most especially for personal computers.
At Parc, I think we were able to make our point. Around 2014 or so we brought back to life the NoteTaker Smalltalk from 1978, and I used it to make my visual material for a tribute to Ted Nelson. See what you think.
https://www.youtube.com/watch?v=AnrlSqtpOkw&t=135s
This system --including everything -- "OS", SDK, Media, GUI, Tools, and the content -- is about 10,000 lines of Smalltalk-78 code sitting on top of about 6K bytes of machine code (the latter was emulated to get the whole system going).
I think what happened is that the early styles of programming, especially "data structures, procedures, imperative munging, etc." were clung to, in part because this was what was taught, and the more design-intensive but also more compact styles developed at Parc seemed very foreign. So when C++, Java, etc. came along the old styles were retained, and classes were relegated to creating abstract data types with getters and setters that could be munged from the outside.
Note that this is also "simulation style programming" but simulating data structures is a very weak approach to design for power and scaling.
I think the idea that all entities could be protected processes (and protected in both directions) that could be used as communicating modules for building systems got both missed and rejected.
Of course, much more can and should be done today more than 40 years after Parc. Massive scaling of every kind of resource requires even stronger systems designs, especially with regard to how resources can be found and offered.
Are you the Alan Kay? Is there any way we can verify this is you? The HN user account seems to have a very low "karma" rating, so one can't help but be more suspicious.
I'm the "computing Alan Kay" from the ARPA/Parc research community (there's a clarinettist, a judge, a wrestler, etc.) I did create a new account for these replies (I used my old ARPA login name).
It's really cool that you weigh in on discussions on HN. Or I suppose it feels like that to me primarily because I grew up reading your quotes in info text boxes in programming texts. And it's cool to have that person responding to comments.
It’s a new account created yesterday. Alan Kay did an AMA here a while back+ with the username “alankay1” and occasionally posted elsewhere. That account’s last post was 7 months ago. Given that user “Alan-1”s style and content is similar, it seems likely that he created a new account after half a year away from HN.
If you want verification, maybe you can convince him to do another AMA =) I’m still thinking about his more cryptic answers from the last one, which is well worth a read. I think that was before Dynamicland existed, but I may be off.
Well, how large is the pool of other possible candidates? Wouldn't someone from that time period (say the Simula folks, or another PARC employee) challenge that assertion? Why would he lie?
Every source I've ever come across on this topic (and I work in PL research) points to Kay as the originator of the term "object-oriented" in relation to programming. No exceptions.
You are now making an affirmative assertion that Alan Kay did not coin the term. The burden of proof is on you, not him.
Link these sources, then! Even someone who recently interviewed him and researched the subject for months confessed he could never corroborate that claim.
You make the claim he coined the term, the burden of proof is on you.
Until you do, it's perfectly reasonable and intellectually honest to reject that claim.
Sure, it's impossible to corroborate at this point because there's no direct evidence of it. It's not like he wrote it in a mailing list that we still have access to. It was (according to what I've read about it) a verbal statement made in response to a question asked of him by someone else. I don't know who the other person is, though perhaps that would be a place to look.
References I've seen have, of course, essentially all pointed back to Kay's claims. I imagine this is insufficient in your eyes, so I won't bother finding them for you.
Arguing "it's reasonable and intellectually honest to reject [the claim that Kay coined the term]" is silly. It's not reasonable, because there's no real reason to suspect the claim to be false in the first place. For 50+ years it has been accepted knowledge that Kay coined the term. Nobody — including people with direct experience on the same teams or with otherwise opposing claims — has stepped forward to dispute this fact in all that time. This would be just like saying "Well I don't think da Vinci really made the Mona Lisa. I mean, all we have is his word for it. Sure, the painting didn't exist before him, and its existence appears to have started with him, and people at the time attribute its existence to him, but for all we know maybe somebody else did it and gave it to him to use as his own!" Sure, it's possible... but it's a silly claim to make (and hence not reasonable).
Your position is not "intellectually honest" because it sincerely looks like you're just trying to be antagonistic. What's the point in arguing that Kay didn't coin the term? Do you have some unsung hero in mind you'd like to promote as the coiner? Or do you just like arguing against commonly-held beliefs for the sake of it? I don't see what you're trying to accomplish.
Two more thoughts:
1. The only way to prove Kay didn't originally coin the term would be to find hard evidence of it used in a similar fashion (i.e., with regard to programming) from prior to 1966 (the time Kay claims he invented the term).
2. If you had such evidence, you would need to prove that Kay had seen it prior to his alleged coinage. In the absence of such proof, the existence of the term prior to Kay's use would be irrelevant. Why? Because the community as a whole has gone off of Kay's claim for the whole time. If somebody else conceived of "object-oriented programming", we didn't get it from them — we got it from Kay.
I'm a little skeptical. That user certainly writes in a similar style to how I've seen Alan Kay write online, but I wouldn't be opposed to seeing some more proof. A one-day-old HN account claiming to belong to one of the most important people in CS from the past 50 years seems a little suspicious haha.
An interesting and unfortunately true commentary on the lack of civilized behavior using technology that actually required a fair amount of effort -- and civilized behavior -- to invent in the first place.
Yeah, it's definitely disappointing that we have to worry about things like that, but that's the nature of the beast I guess. I hope you don't take any offense at my skepticism! For what it's worth, I'm happy assuming you're the real deal because being a cynic all the time is no fun and I have no specific reason to believe otherwise at the moment; I just also wouldn't be surprised to discover it's fake haha.
Also, I walk by your face a few times a week whenever I head into my lab. MEB has redecorated a few times over the years, but they always have a section of pictures of notable alumni and (of course) you're up there. Thanks for giving us a good name in the field and for all you've done!
Merrill Engineering Building! I'm glad it is still around. Those long hallways were used as a "display" to unroll the many pages of Simula machine code listings down one corridor so that three grad students -- including me -- could crawl over it and coordinate to try to understand just what Simula might actually be (the documentation in Norwegian that had been transliterated into English was not understandable).
Armstrong wrote the very famous "Why OO sucks" and then a decade or two later, changed his mind when he saw how successful OO was, and then tried to retrofit Erlang into an OO language. Not by changing Erlang, but by twisting the definition of OOP so that Erlang would fit it.
That isn't what happened at all (see the rebuttal by revvx). Joe was a great guy and also a great systems thinker. And he was the last person to worry about "bandwagons" (quite the opposite!)
Joe Armstrong was criticizing C++-style OOP when he wrote his critique.
After he learned more about Alan Kay's view on OOP, he decided that Erlang is closer to Alan Kay's OOP and he approves that specific flavor of OOP.
He didn't change his stance based on popularity. He changed his stance because in the 80s/90s the term "OOP" was synonymous with C++-style OOP, but that changed in the 2000s thanks to 1) criticism of C++-style OOP becoming commonplace in our industry (thanks to people like Joe Armstrong) and 2) an increase in the popularity of languages like Ruby and Objective-C (which are closer to Smalltalk), and even of much-maligned concepts such as DCOM, SOA and CORBA.
He doesn't even mention C++ in his essay [1], but regardless, the C++ OOP is pretty much the mainstream OOP, which we still use today in Java, Kotlin, C#, etc...
And... no, the change in mindset about OOP never happened. Kay and Armstrong's view of OOP never caught on. Today, OOP is still not seen as message passing; it is mostly seen as polymorphism, parametric typing, classes/traits/interfaces, and encapsulation. The complete opposite of what Erlang is.
I'm the one mentioning C++. To anyone familiar with both styles, Joe Armstrong is clearly not talking about Smalltalk-style OOP in his essay, he's talking about C++/Java/etc style. And later on he only praised Smalltalk-style OOP.
And sorry, by a "change in mindset in our industry regarding OOP" I mean that it became commonplace to criticize C++-style OOP. Not that everyone stopped programming in that style. Maybe there's a better way to phrase it?
"seen as" is the key here. "The masses" ultimately usually get to define terms, for good or bad. The gestalt or "feel" of what OOP "is" is often shaped by common languages and their common usage, again for good or bad.
It may be better to define specific flavors or aspects of OOP or OOP-ish things and discuss them in isolation with specific scenarios. That way the messy issue of canonical definition(s) doesn't come into play as often.
It would then be more of "hey, here's a cool feature of Language X or System X! Look what it can do...". Whether it's canonical or not is then moot.
Well, I disagree with 99% of this... I'm a guy that started with C, moved to functional programming, added C++, and now does all 3.
> Objection 1. Data structure and functions should not be bound together
Well, in my experience, in almost every code-base (whether functional or imperative), we end up with modules, which are a set of functions taking the same type as a parameter. This is very close to binding the functions and the types...
> Objection 2. Everything has to be an object.
I don't get the example. The only thing that this shows is the benefit of having a range type built into the language. Then it's just type aliases.
"There are no associated methods.", yes, but you will need functions to manipulate those types (just translate one type into another), at the end, it's going to a module, which is almost an object.
> Objection 3. In an OOPL data type definitions are spread out all over the place.
That's true. It also makes thinking about the data layout complex. That's why other paradigms have been developed on top of OOP (DOP). Now you can also argue that having those defined together makes dependency management easier.
> Objection 4. Objects have private state.
False. Objects can have private state. This is a problem with mutability, not object-oriented programming. You can have non-mutable OOP.
> Why was OO popular?
>> Reason 1. It was thought to be easy to learn.
The past 20 years have shown how easy it is. In fact, I actually think it's too easy; people rely too much on abstraction without even trying to understand what's going on. In my opinion, it promotes a lazy mindset (this is my biggest criticism of OOP).
>> Reason 2. It was thought to make code reuse easier.
I would like evidence that it's not.
>> Reason 3. It was hyped.
True, but that does not make it bad. People have tried to hype every technology... Some stayed, some went away.
>> Reason 4. It created a new software industry.
How has OOP created a software industry that would not have existed if functional programming had "won the fight"?
Upvoted because it's well-articulated, even though I disagree.
> Well, in my experience, in almost every code-base (whether functional or imperative), we end up with modules, which are a set of functions taking the same type as a parameter. This is very close to binding the functions and the types...
There is a key distinction: If I have two subsystems that use the same data in different ways, I can keep those concerns separate by putting the functions for each concern into a different module. Binding all the functions to the type mixes the concerns together and creates objects with way too much surface area.
Also, most OO langs make a big ceremony out of each new type: create the class file, create the test file, blah blah blah. I want types to be cheap so I can make them easily and capture more meaning with less work.
> Upvoted because it's well-articulated, even though I disagree.
Appreciate it :)
> There is a key distinction: If I have two subsystems that use the same data in different ways, I can keep those concerns separate by putting the functions for each concern into a different module. Binding all the functions to the type mixes the concerns together and creates objects with way too much surface area.
This is where composition helps. Historically, OOP programmers have indeed not been the best at using composition. Looking at more recent projects, though, this has gotten a lot better.
> Also, most OO langs make a big ceremony out of each new type: create the class file, create the test file, blah blah blah. I want types to be cheap so I can make them easily and capture more meaning with less work.
Totally agree with that; the ability to define a type in one line and have it reflected throughout the entire code base through type inference is the one thing that I miss the most in C/C++.
It does, though in my experience it leads you down a path that ends in some pretty strange names, as you nominalise more and more nebulous concepts, trying to verb in the kingdom of nouns.
Is that any different from foldl, foldr, reduce, map? If you have a generic data type you want your operators to be generic, regardless of whether they exist as methods or as separate functions. The only difference is that the object is free to not leak internal implementation details.
> > Also, most OO langs make a big ceremony out of each new type: create the class file, create the test file, blah blah blah. I want types to be cheap so I can make them easily and capture more meaning with less work.
> Totally agree with that; the ability to define a type in one line and have it reflected throughout the entire code base through type inference is the one thing that I miss the most in C/C++.
FWIW, I think that this is what distinguishes object-oriented programming as a language paradigm from object-oriented programming as a design paradigm: If you're going to say that all data types should have the operations you can perform on them bound up together into a single class (or class cluster), then that would imply that small, cheap data storage types are expected to be few in number.
If, OTOH, it's more about modularity, and you're not so concerned about how things happen on the sub-module level, then that gives more ideological space for code that's, for example, functional in the small scale and object-oriented in the large scale, like Erlang. Or procedural in the small scale and object-oriented in the large scale, like some C++ code.
I pretty much agree with your statements, but I'd like to take a stab at:
>> Reason 2. It was thought to make code reuse easier.
> I would like to see evidence that it's not.
Mainstream OOP approaches achieve better cohesion by coupling data structures to functions. In the worst case you end up with what are essentially modules that contain "global variables" local to that module. In other words, the only reason to have your instance variables is to remove the need to pass those variables to the functions as parameters.
This hurts the ability to write generic code. In fact you see this problem all the time in OO code. You have a base class and a bunch of basically unrelated child classes. It's not so much that the child ISA base, it's more that the child ACTS_AS_A base. But then, you run into all sorts of problems because one child (because it is using very different data structures) requires specialised code.
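A tiny, hypothetical Java illustration of the ISA vs ACTS_AS_A point: the child compiles as a subtype, but it only acts like the base and ends up needing special-case handling.

    // Hypothetical example: every storage backend extends a common base...
    abstract class Repository {
        abstract void save(String record);
    }

    class SqlRepository extends Repository {
        void save(String record) { /* INSERT INTO ... */ }
    }

    // ...but this child only ACTS_AS_A Repository: its data structures want
    // batching, so callers end up special-casing it anyway.
    class BatchFileRepository extends Repository {
        void save(String record) {
            throw new UnsupportedOperationException("use saveAll() instead");
        }
        void saveAll(java.util.List<String> records) { /* write one file */ }
    }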
There are ways of getting around this, but often those ways end up encouraging you to implement an alphabet soup of design patterns that interact with each other -- causing more coupling rather than less. All for the want of a generic function.
IMHO OO is actually a poor vehicle for achieving code reuse. In fact, aiming towards this goal is usually one of the root causes I find in really poor OO designs. What OO is really good at is separating concerns and building highly cohesive code. This sometimes comes at the cost of increased coupling which inherently reduces reusability. I don't actually think that's a bad thing when used appropriately, but the old school "OO creates reusable code" is just a bad idea IMHO. It's the kind of thing that several of us threw out the window in the 90's along with large inheritance hierarchies -- nice idea, but didn't work out in practice.
> Objects can have private state. This is a problem with mutability, not object-oriented programming. You can have immutable OOP.
Wouldn't this violate the "encapsulation" pillar of OOP? As far as I know, it's always taught with encapsulation, inheritance, polymorphism being its three pillars.
> How has OOP created a software industry that would not have existed if functional programming had "won the fight"?
I'm not sure functional programming has lost yet. I haven't worked with it personally, and so can't speak to its merits or demerits, but have heard a lot of buzz around it recently. As you said, people tend to hype everything; some stay and some go. It might be the next big thing in programming, or it might be hipster tech. Or, like most things, it might have some good applications, but not be applicable to everything. That's basically my argument for OOP.
> Wouldn't this violate the "encapsulation" pillar of OOP? As far as I know, it's always taught with encapsulation, inheritance, polymorphism being its three pillars.
Encapsulation is "if you have state, you should encapsulate it". It does not require you to have state (much less mutable state). I quite often use objects to represent a logical piece of code, without any attributes.
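For instance, here's a minimal sketch of "immutable OOP" in Java (the class is hypothetical): the object encapsulates whatever it was constructed with, but there is nothing to mutate.

    // An immutable object: state is encapsulated, but it never changes after construction.
    public final class Money {
        private final long cents;
        public Money(long cents) { this.cents = cents; }
        public long cents() { return cents; }
        // "Modification" returns a new object instead of mutating this one.
        public Money plus(Money other) { return new Money(this.cents + other.cents); }
    }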
> I'm not sure functional programming has lost yet. I haven't worked with it personally, and so can't speak to its merits or demerits, but have heard a lot of buzz around it recently.
As much as I really enjoy FP, I don't think it has more than 1% of the market share of software engineering. And I've been hearing "there's a lot of buzz around it recently" for more than 10 years.
The question isn't whether OOP or FP will win, but what mix of both is best. A lot of old OOP languages have added features that move them towards FP. Off the top of my head, C# got extension methods, lambdas, and many ways to be less mutable or to pass multiple values around. The bit of programming history I was allowed to experience most definitely became more functional.
This is exactly it. The question is not "which will take over the world". Both can contribute useful features in various situations, and this is why I like languages that let me use what is best in each situation.
> Objects can have private state. This is a problem with mutability, not object-oriented programming.
That seems to be the crux of it - I remember reading a post by Paul Graham where he said something similar about object oriented programming. His core thesis seemed to be that functional programming does a better job of organizing things than object-oriented programming does, and once you have the core of functional programming in place (closures, first-class functions, whatever the hell a "monad" is), you don't need object orientation any more. I've never gotten deep enough into pure functional programming to really see things this way, but I've gotten deep enough to at least understand why these pure FP guys might think that.
> Well, in my experience, in almost every code base (whether functional or imperative), we end up with modules, which are a set of functions taking the same type as a parameter. This is very close to binding the functions and the types...
And the world is not so neatly divided between things that just are (data structures) and things that do things (functions). Take a date, for example. The fact that it is a Wednesday is a "just is" sort of thing, but it is typically implemented as a function.
I'm not sure I'd agree that it being a Wednesday is a "just is" sort of thing. The point in time is a data point (on the time axis, if you will). A function then needs to place it in a calendar.
FWIW I'm struggling to come up with a good example of where the line between data and functions is clearcut, except perhaps when the data describes a function: an SQL string, some code that'll get eval'ed, etc.
>The point in time is a data point (on the time axis, if you will)
You need some way to place it on that axis, though. Commonly we use Day, Month, and Year to do so. But we could also define a date as the seventh Wednesday in 2019. Or as an integer relative to Jan 1 1970.
I still think OO provides a pretty easy mental framework for programming. You can get good results. A bit of discipline, without going crazy, and it works really effectively despite its shortcomings.
I don't think what I'm about to say is necessarily inherently true, but it reflects how things seem to work in practice:
It seems to me that part of the problem is that OO doesn't force you to have discipline, and without constant vigilance (which product owners are never willing to schedule) the system inevitably gets out of control over time.
On the other hand, it seems to me that the core principles of functional programming (immutability, functional purity, construction via composition, and declarative programming) serve as a check that prevents things from getting out of control.
That being said, I think it's worth considering that all of the "core principles" of FP that I mentioned could be incorporated into OO. It also seems like FP can be more prone to out of control syntax (e.g. unwise use of point free style).
Forced discipline is a double-edged sword. It can help you keep things clean and understandable, but it can also limit things. For example: I am pretty good with React and Redux, but I was more productive with jQuery. jQuery doesn't have the forced discipline of React+Redux, but it gets the job done. At the same time, I've seen larger amounts of crap jQuery code than React+Redux code.
The real issue: discipline has to be learned, often by experience; it isn't forced. If you force it, then no one knows why things are the way they are.
> If you force it, then no one knows why things are the way they are.
It's more accurate to say "if you don't teach why, then no one knows why". Forcing or not has nothing to do with it.
I would argue "forcing" is strictly better, since learning discipline requires experience in doing things in every other wrong way. While that's great for learning on your own, I wouldn't want developers to "learn discipline" like this in production.
I wish more languages would let you do stuff like mark reference parameters to methods as unable to be changed or reassigned within the method, get a readonly reference to a list without having to make a copy, that sort of thing. It doesn't have to be forced, just give me the option so I can get a guarantee on something if I want to.
If you mark all fields and variables in Java as final, you get pretty much this experience?
If I could go back in time and unilaterally make one change to Java, it would be to make `final` default. But if you just get in the habit of using it (the IDE helps), non-final variables look broken. And once objects have all-final fields, immutability just starts spreading upward in your code.
I’m a bit surprised they haven’t copied Scala’s case class, where you get immutable fields by default and helpers to copy with updated fields, along with a sane hash and equality implementation. Making immutability easier than the alternative makes a huge difference in practice.
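For reference, later Java versions did get part of the way there with records (Java 16+): immutable components plus generated equals/hashCode/toString, though still no built-in copy-with-updates helper like Scala's `copy`. A minimal sketch:

    // A record gives you final fields, accessors, equals/hashCode and toString for free.
    record Point(int x, int y) {
        // The "copy with one field changed" helper still has to be written by hand.
        Point withX(int newX) { return new Point(newX, y); }
    }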
    final List<?> immutable = Collections.unmodifiableList(x);
and pass it around.
Never expose your collections directly (unless you are willing to go copy on write and immutable objects inside the collections, the latter is very welcome, though)
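A small Java sketch of that advice (class and field names made up): keep the collection private and hand out an unmodifiable view rather than the collection itself.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public final class Order {
        private final List<String> items = new ArrayList<>();

        public void addItem(String item) { items.add(item); }

        // Callers can read the items but cannot mutate the underlying list.
        public List<String> items() { return Collections.unmodifiableList(items); }
    }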
Rust is immutable by default, optionally mut, and you lend different kinds of references & (immutable-- can have many) &mut (mutable reference-- must be unique)
Just my personal anecdote. The only functional programming languages I have extensive experience with are C and JS. I have NEVER seen a sensibly organized or maintained medium to large sized C or JS application. Every time it's been total chaos. In Java projects, there's a 50/50 shot of it being moderately sensible. I'm confident other people have completely different experience. Based on your post it sounds like you have had a different experience, and I have no trouble believing that. Anyways, just my $0.02.
In this context, C and JavaScript would not be considered functional. They have functions, but that's not what most people mean by "functional". While it's possible to restrict yourself to a functional subset in both of them, they would typically fall into the "imperative" category.
Imperative programming languages (like C and JavaScript) don't generally impose any discipline on the users.
> Imperative programming languages (like C and JavaScript) don't generally impose any discipline on the users.
I think this view disrespects history a little. C was one of the earliest languages to be conceived upon a foundation of structured programming principles, i.e. block structure, sequence/selection/repetition, subroutining. (Okay the language still has goto, hopefully we can agree to not make a big deal out of that.) The kind of discipline proposed by structured programming was far from universally well-received at the time, and I think it's fair to say that it led to huge improvements in the quality of codebases everywhere, and is one of the great successes of programming language thinking of the '70s.
C is also statically typed. It's obviously easy to blow all sorts of holes in C's type system, but if you'd go so far as to say that C's types impose no discipline at all, I'd ask you to try teaching C to a room full of compsci students who have been raised on something like Python.
I love C, but I'm hard-pressed to think of any language other than assembly which is more willing to get out of your way if you so much as nudge in an undisciplined direction. C certainly does not impose much discipline on programmers, but it allows them to bring some if they walk the line.
> Okay the language still has goto, hopefully we can agree to not make a big deal out of that.
> It's obviously easy to blow all sorts of holes in C's type system
Yes, I'm willing to politely ignore evidence which refutes your point. I mean you were kind enough to make my case for me. :-)
> I'd ask you to try teaching C to a room full of compsci students who have been raised on something like Python.
Dynamic vs static typing seems orthogonal to what we're talking about here, but maybe I'd have to think about that more. Python just waits to catch your type errors until runtime. Comparing that to JavaScript in a browser which silently ignores your errors (or happily performs crazy type conversions), Python seems disciplined in comparison.
I think this might be a little unfair to JavaScript. No doubt it's a multi-paradigm language, but there's a huuuge FP community in JS, and loads of code bases (esp. in React) are written in a very functional style. Definitely not comparable to passing function pointers around in C.
Almost everyone I've come across who writes JS in a "functional style" has done it because they've had prior experience using a FP language and are applying those lessons to JS. I believe the OP is talking about those FP languages, such as Haskell and Clojure, which have a very different programming experiences.
While JS supports it in many ways, it's still not a style inherent in a multi-paradigm language like Javascript nor is it (really) the primary style in popular frameworks - despite the fact inspiration from FP languages/libraries has been increasingly common in popular frontend frameworks.
Additionally, even if you go full-bore FP on JS, it's still not the same. Almost no one goes full-bore FP in JS because it really doesn't make sense to nor is it an easy thing to do.
You can get maybe 80% of the way there, but non-FP dependencies can still hurt you. That's also a problem with clojure dependencies on Java libraries, but less so because at least the clojure ecosystem mostly buys into the FP paradigm.
Definitely don't disagree with anything you say, just don't think the characterization of JS as a strictly imperative language is fair -- especially when compared with C.
FP is different things to different people. For me, it's pure functions and a preference for purely functional data structures (Okasaki style).
Also to me, FP has an emphasis on avoiding mutating state. The second example on the React front page shows how to mutate state, and then they just build from there. I don't use React, but looking at those examples, it all looks very OO to me.
If we are talking languages of certain paradigms imposing discipline then JS not being a language conceived or further developed with FP in mind is not one of those, regardless of how folks use it.
I think you're confusing procedural with functional. Modern Javascript has some support for functional idioms, but C is as far from functional programming as you get (nothing wrong with that, of course)
Agreed. I remember thinking "what don't I get? Why do we need getters and setters?". After some years (and discovering Python), I realized there's nothing to get, it's just ridiculous overengineering 95% of the time. Same goes for a lot of stuff in OO. I attribute it to the corporate mindset it seems to thrive in, but I could be wrong.
The important thing is restricting your public interface, hiding implementation details, and thinking about how easy your code (and code that uses it) will be to change later. It's not an OO vs anything thing.
When you want a value from a module/object/function/whatever, whether or not it's fetched from a location in memory is an implementation detail. Java and co provide a short syntax for exposing that implementation detail. Python doesn't: o.x does not necessarily mean accessing an x slot, and you aren't locking yourself into any implementation by exposing that interface as the way to get that value. It's more complicated than Java or whatever, here, but it hides that complexity behind a nice syntax that encourages you to do the right thing.
Some languages provide short syntax for something you shouldn't do and make you write things by hand that could be easily generated in the common case. Reducing coupling is still a good idea.
> The important thing is restricting your public interface
That is the important thing sometimes. At other times the important thing is to provide a flexible, fluent public interface that can be used in ways you didn't intend.
It really depends on what you're building and what properties of a codebase are most valuable to you. Encapsulation always comes at a cost. The current swing back towards strong typing and "bondage and discipline" languages tends to forget this in favour of its benefits.
It scares you because you're making some assumptions:
1. You assume that I'm writing software that I expect to use for a long period of time.
2. Even if I plan to use my software for an extended period of time, you're assuming that I want future updates from you.
Let me give you an example of my present experience where neither of these things are true. I'm writing some code to create visual effects based on an API provided by a 3rd party. Potentially - once I render the effects (or for interactive applications - once I create a build) my software has done its job. Even if I want to archive the code for future reuse - I can pin it to a specific version of the API. I don't care if future changes cause breakage.
And going even further - even if these conditions do apply, the worst that happens is that I have to update my code. That's a much less onerous outcome than "I couldn't do what I wanted in the first place because the API had the smallest possible surface area".
I'll happily trade future breakage in return for power and flexibility right now.
Maybe instead of "restrict" it would be better to say "be cognizant of." If you want to expose a get/set interface, that's fine, but doing it with a public property in Java additionally says "and it's stored in this slot, and it always will be, and it will never do anything else, ever." I don't see what value that gives in making easy changes for anyone. I don't see why that additional declaration should be the default in a language.
You get into the same issue with, e.g., making your interface be that you return an array, instead of a higher-level sequence abstraction like "something that responds to #each". By keeping a minimal interface that clearly expresses your intent, you can easily hook into modules specialised on providing functionality around that intent, and get power and flexibility right now in a way that doesn't hamstring you later. Other code can use that broad interface with your minimal implementation. Think about what you actually mean by the code you write, and try to be aware when you write code that says more than that.
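As a rough Java analogue of the #each point (the class is hypothetical): exposing `Iterable<String>` instead of a concrete array says only "you can iterate over these", leaving the storage free to change later.

    import java.util.List;

    public final class Tags {
        private final List<String> tags = List.of("alpha", "beta");

        // The interface promises iteration, not an array or any particular container.
        public Iterable<String> all() { return tags; }
    }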
I think it's interesting that you associate that interface-conscious viewpoint with bondage and discipline languages. I mostly think of it in terms of Lisp and Python and languages like that where interfaces are mostly conceptual and access control is mostly conventional. If anything, I think stricter type systems let you be more lax with exposing implementations. In a highly dynamic language, you don't have that guard rail protecting you from changing implementations falling out of sync with interfaces they used to provide, so writing good interfaces and being aware of what implementation details you're exposing becomes even more crucial to writing maintainable code, even if you don't have clients you care about breaking.
Of course all this stuff goes out the window if you're planning to ditch the codebase in a week.
Those aren't abstractions... Also, I'm not arguing that you can't contrive an abstraction around a getter; I'm arguing that it isn't useful to do so (so please spare me contrived examples!).
You're always using a getter. It's just a question of what syntax your language provides for different ways of getting values, and how much they say about your implementation.
Most people don't have a problem with getters and setters, they have a problem with writing pure boilerplate by hand. Languages like Python and Lisp save you from the boilerplate and don't provide a nicer syntax for the implementation-exposing way, so people don't generally complain about getters and setters in those languages, only in Java and C++ and things.
I think we're coming at it from different angles. My point is that there shouldn't be any abstraction to write, and it should just be the way the language works. Primitive slot access in Java is not just a get/set interface, it's a get/set interface that also specifies implementation characteristics and what the code will be capable of in the future. It should be in the language so that you can have primitive slots, but it shouldn't be part of the interface you expose for your own modules, because adding pointless coupling to your code does nothing but restrict future changes. Languages should not provide an easy shortcut for writing interfaces like that.
I don't view it as a useless abstraction, because I view it as the natural way of things. I view specifying that your get/set implementation is and always will be implemented as slot access to be uselessly sharing implementation details that does nothing but freeze your current implementation strategy.
I think a better question is when that abstraction gets in your way. When does it bother you that nullary functions aren't reading memory locations? Why do you feel that's an essential thing to specify in your public interface, as a default? There's nothing stopping you from writing code in Python and mentally modelling o.x as slot access, because it follows the interface you want from it.
If you only care because it's something extra you have to do, then that's what I meant by boilerplate. I think it's a misfeature of Java's that it presents a model where that's something extra you have to do.
> My point is that there shouldn't be any abstraction to write, and it should just be the way the language works.
I understand your point, but I think you misunderstand what "abstraction" means. "abstraction" doesn't mean "function" (although functions are frequently used to build abstractions), and if you have "dynamic properties" (or whatever you'd like to call them) a la Python, then you're still abstracting. My point is that abstracting over property access (regardless of property-vs-function syntax) is not useful, or rather, I'm skeptical that it's useful.
> I think a better question is when that abstraction gets in your way. When does it bother you that nullary functions aren't reading memory locations? Why do you feel that's an essential thing to specify in your public interface, as a default? There's nothing stopping you from writing code in Python and mentally modelling o.x as slot access, because it follows the interface you want from it.
I think this is a good question, because it illustrates a philosophical difference--if I understand your position correctly, you'd prefer to be as abstract as possible until it's problematic; I prefer to be as concrete as possible until abstraction is necessary. There's a lot of mathematical elegance in your position, and when I'm programming for fun I sometimes try to be maximally abstract; however, when I'm building something and _working with people_, experience and conventional wisdom tells me that I should be as concrete and flat-footed as possible (needless abstraction only makes it harder to understand).
To answer your question, that abstraction gets in your way all the time. The performance difference between a memory access (especially a cache-hit) and an HTTP request is several orders of magnitude. If you're doing that property access in a tight loop, you're wasting time on human-perceivable timescales. While you can "just be aware that any given property access could incur a network call", that really sucks for developers, and I see them miss this all the time (I work in a Python shop). We moved away from this kind of "smart object" pattern in our latest product, and I think everyone would agree that our code is much cleaner as a result (obviously this is subjective).
TL;DR: It's useful to have semantics for "this is a memory access", but that's unrelated to my original point :)
It's frustrating to read this thread and your comment kind of crystallized this for me so I'll respond to you.
Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO. This is a getter that you almost certainly use constantly.
Please try to consider your statements and potential counter factuals before spraying nonsense into the void
> Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO.
Er, aside from C and ASM, few non-OO languages require that kind of manual effort. That's not a triumph of OO, it's a triumph of using just about any language that has an approach to memory management above the level of assembly.
> Please try to consider your statements and potential counter factuals before spraying nonsense into the void
My claim was that getter abstractions as described by the GP (abstracting over the “accessed from memory” implementation detail) are not useful. Why do you imagine that your array length example is a reasonable rebuttal?
It's not the length of the array. It's using things like array[20]. Yes, that exists pre-OO and outside of OO, but it's the foundational aspect of OO and one of the strongest use cases.
Sorry for the way I communicated- I was tired and should have reconsidered.
> Sorry for the way I communicated- I was tired and should have reconsidered.
No worries, it happens. :)
> It's not the length of the array. It's using things like array[20]. Yes, that exists pre-OO and outside of OO, but it's the foundational aspect of OO and one of the strongest use cases.
I'm not sure what you're getting at then. Indexing into an array? Are you making a more general point than arrays? I'm not following at all, I'm afraid.
I think my argument is basically that arrays are effectively object oriented abstractions in most languages.
You aren't responsible for maintaining any of the internal details; it just works like you want it to. My example was the getter for the item at index 20 (since you had specifically called out useless getters), but it applies equally well to inserting, deleting, capacity changes, etc.
> I think my argument is basically that arrays are effectively object oriented abstractions in most languages.
I think I see what you mean, although I think it's worth being precise here--arrays can be operated on via functions/methods. This isn't special to OO; you can do the same in C (the reason it's tedious in C is that it lacks generics, not because it lacks some OO feature) or Go or Rust or lisp.
These functions aren't even abstractions, but rather they're concrete implementations; however, they can implement abstractions as evidenced by Java's `ArrayList<T> implements List<T>`.
And to the extent that an abstract container item access is a "getter", you're right that it's a useful abstraction; however, I don't think that's what most people think of when they think of "getter" and it falls outside the intended scope of my original claim.
> Using an array without having to (manually) calculate the size of the objects contained within is like the major triumph of OO.
I've used arrays in countless OO and non-OO programming languages, and I do not recall ever having to manually calculate the size of objects contained therein – what are you talking about? Only C requires crap like that, but precisely because it doesn't have first class arrays.
I get that "what don't I get?" feeling all the time. Overengineering is basically an epidemic at this point, at least in the JS/front-end industry.
My guess is there's a correlation between overengineering and career success, which drives it. Simple, 'KISS'-style code is the easiest to work with, but usually involves ditching less essential libraries and sticking more to standards, which looks crap on your resume. Most interviewers are more interested in whether you can piece together whatever stack they're using rather than whether you can implement a tough bit of logic and leave great documentation for it; so from a career perspective there's zero reason for me to go for a (relatively) simple 100-line solution to a tough problem when I can instead go for a library that solves 100 different use cases and has 10k lines of documentation that future devs have to wade through. The former might be the 'best' solution for maintainability but the latter will make me appear to be a better engineer, especially to non-technical people, of which there are far too many on the average team.
Well, it depends on what you are doing. I designed some systems that were too complex and some that were too simple and couldn't grow as a result. So, with experience, one will hopefully see that supposed overengineering is sometimes only overengineering until you actually need that specific flexibility in a growing system. And there is little substitute for experience to know which is which.
In the original JavaBeans spec, getters and setters served two purposes:
1. By declaring a getter without a setter, you could make a field read-only.
2. A setter could trigger other side effects. Specifically, the JavaBeans spec allowed for an arbitrary number of listeners to register callbacks that trigger whenever a value gets changed.
Of course, nobody actually understood or correctly implemented all this, and it all got cargo culted to hell.
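A rough sketch of both purposes in Java, using the standard java.beans.PropertyChangeSupport helper; the bean class and its fields are made up:

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    public class Thermostat {
        private final PropertyChangeSupport listeners = new PropertyChangeSupport(this);
        private final String id = "living-room"; // 1. getter but no setter => read-only
        private double target = 20.0;

        public String getId() { return id; }
        public double getTarget() { return target; }

        // 2. The setter doubles as a hook: registered listeners are notified of the change.
        public void setTarget(double newTarget) {
            double old = this.target;
            this.target = newTarget;
            listeners.firePropertyChange("target", old, newTarget);
        }

        public void addPropertyChangeListener(PropertyChangeListener l) {
            listeners.addPropertyChangeListener(l);
        }
    }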
Finally someone mentions using getters to create read only fields. Objects are the owners and guardians of their own state. I don't see how this is possible without having (some) state-related fields that only can be read from the outside.
IME the thing with getters and setters is that everyone is doing it (inertia) and that other options either suck (syntactically) or break the "everything is a class" constraint.
Ruby is far from being my favorite language, but I like how Structs "solve" the getter/setter problem in it:
    MyStruct = Struct.new(:field_one, :field_two)
It doesn't clutter your code with multiple lines of boilerplate, and it returns a regular class for you to use, not breaking the "everything is a class" constraint.
In my opinion OO design is valuable in extremely large code bases and/or code bases that will likely exist for decades and go through multiple generations of significant refactoring.
With respect to your setters and getters question, particularly in regards to Python... The @property feature in Python is just a specific implementation of the setters/getters OO design principle. I can easily be convinced typing foo.x is better than foo.getX(), but I have a hard time having a strong emotional reaction to one vs the other if the language allows them to have the same benefits.
Yeah, somewhere it stopped being about modeling your problem and it became a code organization technique. There was an incredible effort to formalize different modeling techniques/languages but it’s dried up.
It seems to be what we do; I'd say FP is in the same place. My CS program was heavily built around the ML family of languages, specifically Standard ML, with algebraic types, functions, pattern matching (on your types), etc. It seems like that kind of "functional programming" is a radically different thing from what people do in JS or Erlang and call by the same name. It all comes around, I guess; static types were pretty gauche 10-15 years back, and now how many folks are using TypeScript to make their JS better?
Either you tell your objects what to do, which means they have mutable state, which means you are programming in an imperative way.
Or you get values from your objects. You need getters for this, but you can guarantee immutability and apply functional programming principles to your code.
You can't have your cake and eat it too. At the end of the day, you need values.
It's harder to write simple code because that requires a crystallized understanding of the problem. You can start banging out FactoryManagerFactories without having the faintest idea of the core problem at hand. Maybe some of the silliest OO patterns are like finger warmup for coders? Unfortunately that stuff still ends up sticking to the codebase.
That sounds wise but it doesn't really mean anything. There are things that suck about a Ford Pinto and there are things that suck about a Tesla Model S, but saying that they both have their downsides is technically true while obscuring the fact that the Tesla is a muuuuuuuuuuuuuuch better car.
False equivalence - even if they are similar in one respect doesn't erase the other differences. Saying both a bicycle and a truck can move things on roads and can kill you if run over by one are technically true but misses many other larger differences.
Admittedly I'm somewhat of a FP fanboy, but I seriously cannot disagree with you more on this.
Functional Programming (and Logic Programming) are better than other paradigms because, unlike Java (or C++, or C#...) there is an emphasis on correctness, and the people working on FP compilers (like Haskell and Idris) are utilizing mathematics to do this.
No idea on your opinion on mathematics, but to me Math/Logic reign supreme; the more mathematically-bound your program is, the less likely it is to do something you don't want later.
Compare this to Java. It's 2019, and we're still doing `if (x == null) return null` all over the place (I'm aware that the option type exists, but that doesn't really help when it's not enforced and none of my coworkers use it). How about having to create six different files for something that I could have written in 20 lines in Haskell? Or how about the fact that the type system exists to help with optimizations and, due to a lack of support for structural typing, can only be useful for that.
I realize that I'm picking on Java, but Java is the biggest target when it comes to OOP as the industry understands it. I personally cannot stand having to create fifty files to do something like a database wrapper, and in Java that's effectively the only way to program.
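For what it's worth, here is the kind of contrast being described, as a hedged Java sketch (the `User` class is made up): the null-checking style versus chaining through `Optional`.

    import java.util.Optional;

    class User {
        private final String email;
        User(String email) { this.email = email; }
        String getEmail() { return email; }
    }

    class UserLookup {
        // The style being complained about: null checks everywhere.
        static String emailOrDefault(User user) {
            if (user == null) return "unknown";
            String email = user.getEmail();
            if (email == null) return "unknown";
            return email;
        }

        // The Optional style: absence is part of the type instead of a convention.
        static String emailOrDefault(Optional<User> user) {
            return user.map(User::getEmail).orElse("unknown");
        }
    }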
> I realize that I'm picking on Java, but Java is the biggest target when it comes to OOP as the industry understands it. I personally cannot stand having to create fifty files to do something like a database wrapper, and in Java that's effectively the only way to program.
I had this experience once in a Rails shop.
A simple database table mapped to a CRUD API endpoint would take from five to ten files. That amounted to about 500 lines, plus a lot of tests for each class.
I never really understood why programming became so verbose. In an ideal world I'd have a declarative API that mapped the table to the API for me automatically. In a realistic timeline I'd just use the traditional Rails approach and be happy. But the people working there preferred to use complicated patterns and a lot of boilerplate before they were needed, even though the project was perpetually late and riddled with bugs. I wish we could give a chance to simpler ways of solving problems.
Yeah, I had similar issues with Rails as well; I feel people can be a bit too liberal with creating files, but I personally follow this mantra when I do it: is the benefit of separation worth the obfuscation introduced by adding a partition? Sometimes it is, and then I make a new file.
This is a bit of shameless self-promotion, but I've actually written a framework that's MVC-ish that lets you create really declarative APIs. The first version is written in NodeJS that I actually deployed in production [1], and I have an Erlang port that's semi-complete that I've recently started hacking on again [2], with the whole crux of it that you should be able to simply declare the composition of your actions.
We are currently in a swing back in favour of statically typed languages. People seem to have forgotten why we previously had a huge trend towards more expressive, less strict, dynamically typed languages.
Maybe we learn something each time the pendulum swings, but as someone knee-deep in C# at the moment, the quality of the APIs I have to deal with is far below those I was used to in Python (at least in terms of elegance and usability).
I'm not sure whether these flaws are inherent or whether it's possible to have one's cake and eat it.
I don't think it has anything to do with dynamic or static typing, but more to do with teams, libraries and program design.
I've had terrible experiences with complexity and verbosity in Ruby and Python codebases, which are dynamically typed. On the other hand, I've worked with super expressive and simple-to-work-with codebases in C# and Haskell. And I had the opposite experience as well at other times.
It is absolutely possible to have the cake and eat it in this regard.
In fact I'd consider Haskell way more expressive than any dynamic language I ever worked with.
I love Haskell, and I agree that it's expressive, but any language with a nominal type system like Haskell is inherently going to be less expressive than a dynamic language.
Compare these functions, one in JS and one in Haskell:
    function f (obj) {
      var first = obj.x;
      var second = obj.y;
      return first + second;
    }
vs.
    -- assuming hypothetical typeclasses HasX, HasY with accessors x, y :: a -> Int
    f :: (HasX a, HasY a) => a -> Int
    f foo = x foo + y foo
(I'm a little outta practice with both languages, but my point will still stand)
With the JS version, f can take in any value that has the properties `x` and `y`, while with the Haskell version, the type has to implement the typeclasses `HasX` and `HasY`. While the Haskell version is still better than something like Java because you can implement a typeclass without modifying the core datatype, it's still inherently less expressive.
I'm not saying that it's not worth it (cuz Haskell is awesome for everything but records), but it's still less immediately reusable.
Step 1 is to use a static analyzer. Enforce null checks and finals and such. I think creating a lot of files only hurts up front, but I will give up more keystrokes in favor of unequivocal stack traces any day. Also, javadoc is the best.
This sounds like the root of your problem. If you want to do FP, and none of your coworkers want to do FP, then your problem isn't really the language.
If you're in a Java shop, maybe start by evangelizing FP rather than a totally different language/platform? It's possible to do FP in Java, and (IMO) it ends up pretty reasonable. But it's not the default habit for J Random Java Programmer, so they need training.
Sure, but as John Carmack has said, if the compiler allows something, it will end up in the codebase once it gets sufficiently large.
I work for a brand-name big company (I won't mention it here but I'll tell you if you email me) that hires incredibly talented engineers that are a lot smarter than me. The codebase I work on is around ~20 million lines of Java, and I've seen stuff in there that is so incredibly gross that a compsci professor would write "see me after class" if you submitted it.
Example: I once saw a piece of code doing this:
    do {
        // doing stuff
    } while (false);
It took me about 10 minutes of digging into the code to realize that the person who wrote this was doing this so that they could add a `break` in there as sort of a makeshift `goto` so they could early-exit and skip all the rest of the stuff in the block. Needless to say, I was horrified.
Why is it that incredibly talented engineers are writing awful code like that? It's certainly not incompetence; what almost certainly happened was that there was some kind of time-crunch, and the dev (understandably) felt the need to cheat. This is a direct consequence of the compiler allowing a bad design.
Functional programming has no relevance to the correctness of code.
You'll notice that it's virtually unheard of to use any FP languages in critical software. Instead they use languages that lend themselves well to code reviews, static and dynamic analysis, model-based design and proofs, etc. Like C, Ada and some domain-specific stuff.
The kind of "correctness in the small" offered by Haskell through its type system can also be obtained in languages like C++, Swift and others. With the additional benefit of massive market share, teaching resources, mature tooling and so on.
>You'll notice that it's virtually unheard of to use any FP languages in critical software.
Erlang powers around 40% of the world's phone networks; and if it's not mission-critical I'm not entirely sure what is.
For that matter, Whatsapp is also written in Erlang and Jane Street does trading applications in OCaml. Without making a judgement on whether or not they should, both Whatsapp and Jane Street create very large apps and have created successful businesses with FP.
> Instead they use languages that lend themselves well to code reviews, static and dynamic analysis, model-based design and proofs, etc
I can't tell if you're being serious; are you suggesting that Functional Programming doesn't lend itself to proofs? Really? Have you ever heard of Coq or Idris or Agda? They literally have modes to prove the correctness of your code.
What about functional programming doesn't lend itself to code reviews? I did F# for a living for two years and we had regular code reviews. I also used the .NET performance profiling tools which worked fine for F#.
> The kind of "correctness in the small" offered by Haskell through its type system can also be obtained in languages like C++, Swift and others.
Uh, no. Sorry, that's just flatly wrong.
Yes, static analysis tools are awesome, but you will never get the same level of compile-time safety from C++ that you will from Haskell or Rust or any number of functional languages. The type systems of C++ and its kin carry very little information, making it impossible for the compiler to shield you from much.
TLA+, SPARK, Frama-C, PROMELA, Astree, even plain C or C++ and a heavily safety-oriented process are used when correctness is important more than any of the FP languages you've mentioned.
In fact, by mentioning Erlang and Ericsson, you exhausted the only case supporting your point. Maybe if you tried hard, you could come up with a couple more. Now let's do the same exercise for the languages and tools I enumerated and it will take a long time until one runs out of examples.
WhatsApp is another perennial example in these discussions. I can accept it although there's nothing critical about a chat app - and once again it's rather an exception instead of the rule. Most chat applications are written in "not FP" programming languages and work just as reliably as WhatsApp.
In case it's not yet clear from the above, I believe that only tools which are used heavily in the industry deserve our attention, not obscure languages which haven't been put to the test and one off projects. The oldest trick in the FP argument book is finding some minor FP language to match any requirements put together by critics. So yes, I've heard of Idris and Agda - on HN - because barely anyone else uses them or talks about them. Coq is perhaps the outlier, because it was used to verify CompCert, but then again CompCert itself is used to implement a lot more things.
But Coq, Idris, Agda and so on are actually red herrings, because when people praise FP's correctness benefits, they refer to standard languages like Haskell, F# or OCaml for which there is in fact little proof that they have a significant effect on program correctness. Obsessively encoding information in the type system will reduce or eliminate some types of errors, but that's far from proving a program correct and really not that far at all from what's available in other standard, mainstream languages, for less effort, better support and a great ecosystem.
I don't know anything about SPARK or Frama-C, but TLA+ isn't a programming language, and you can use it to model distributed functional apps just fine (I still do).
Even if the Erlang/Ericsson stuff is the "only case" (it's not), I do not see how that makes my point less valid; Erlang was specifically designed for systems that cannot fail. Telephones are just a good example of that.
And doesn't your TLA+ model make your functional code significantly more reliable? Guess what, it does the same thing for OO languages => no need to use FP to increase reliability.
Same goes for the other tools or languages I mentioned.
Having proper support for option or sum types is orthogonal to whether a language is object-oriented or not. Crystal is an OO language that has sum types, for example (and yes, nil is separated from other types, so a method returning a Duck will really do that, and it won't return nil unless the signature is Duck | Nil).
Scala failed because it's the opposite of pragmatic. If you're looking for pragmatic, take a look at Kotlin.
As for Scala 3, it's still years away, if it ever comes out. And when it does, there's little reason to think its goals will be different from what Scala 2 was (an academic language) since it's the same team as Scala 2 writing it.
It's a language that's aimed more at research, producing papers for conferences, and financing the EPFL and its PhD students than at users in the real world.
There's absolutely nothing wrong with that, by the way, I love studying all the advanced concepts that Scala has pioneered over the years.
But it's also the reason why it's largely in decline and why Kotlin has taken the industrial world by storm: because it is a pragmatic language.
I've used Scala in production environments, and we never had any problems with it being too academic. SBT sucks, but that's another issue.
Kotlin doesn't have typeclasses (something you get as a side effect of Scala implicits), ADTs, or true pattern matching (along with exhaustivity checks). In combination, all of those allow for expressive, easy-to-read code that, in my experience, tends to have few bugs. Kotlin is a step backwards from that. It's still a significant step up from Java, however.
F# is the only other language I've used that I've found comes close. However it lacks typeclasses, and the large Java ecosystem.
A step backward to you is a step forward in pragmatism for the rest of the world.
I understand the value of higher kinds and I'm comfortable with Haskell, but it's pretty obvious to me why Kotlin is succeeding where Scala failed.
Sometimes, improvements in programming languages are reached by having fewer features, but Scala is a kitchen sink that was always unable to turn down features, just because their implementation would lead to more research papers to submit to conferences.
As a result, we ended up with a monster language that contains every single feature ever invented under the sun.
AFAIK Scala is much more popular than Kotlin in terms of job postings and projects in big enterprises. My data, limited to some Fortune 100 companies, tells me it is on par with Python in popularity. Spark, Kafka, Flink, Finagle are written mostly in Scala. Pretty impressive for an academic, non-pragmatic language that has failed, isn't it?
So can you elaborate on what you mean by "failed"? Because it seems you are using a different definition of it.
"Sometimes, improvements in programming languages are reached by having fewer features, but Scala is a kitchen sink that was always unable to turn down features, just because their implementation would lead to more research papers to submit to conferences."
That's some different language you're talking about. Scala is built on a small set of very powerful, general, orthogonal features which cooperate nicely and allow most of the stuff to be built as libraries. Its design is much more principled than Kotlin's. Kotlin has special features built into the language that Scala needs just a library for.
Kotlin is just Scala with a few of the most advanced features taken out and no good replacement. It is not even really much faster in compilation speed when you account for its verbosity [1], and it has worse IDE support, limited to just one IDE. JetBrains is not interested in supporting other IDEs than its own. So what is so much more pragmatic about it?
Also there is no decline in Scala usage, and Kotlin doesn't really exist outside its Android niche. So "taking the industrial world by storm" is wishful thinking.
Sure, Scala has some neat features too (though I'm not 100% sold on the language).
That said, and I addressed this specifically, when I say "OOP" in the software world, people typically think of Java, C++, or C#, and those are what I'm addressing specifically.
I suppose in the most technical sense of the word, you could argue that Erlang is OOP at some level, and Erlang is awesome, so if we want to play with definitions then sure, I'll concede that OOP is good, but until the industry as a whole agrees on these terms, and doesn't treat OOP as a synonym for "Java/C++/C#", I'm still going to say that I hate OOP.
It's easy to hate on OO because of something along the lines of it not being a neat mathematical formalism, which can facilely be argued as strictly a deficiency: if you don't look too closely, it certainly appears as only a deficiency.
I think a deeper look inevitably runs into two things:
(1) certain domains are more easily approached through spare mathematical formalisms than others. E.g. if the domain you're modeling is already most easily thought about in terms of compositions of mathematical transformations, you should probably model it functionally.
(2) Finding a declarative characterization of the results you'd like, or a neat chain of functional compositions which produce it, typically takes more work up front. (For many projects, that initial work up front is worth it, but for lots and lots of others, it's essentially overengineering.)
OO is often not 'ideal,' but frequently, solidly pragmatic.
As a paradigm, the aesthetic behind it reminds me of TypeScript's designers intentionally foregoing soundness of the type system.
OO languages work effectively in spite of OO features. Sounds like a hot take, but throw away inheritance altogether (or use it to automatically delegate to some component, like a dodgier version of struct embedding in Go), use interfaces if the language doesn’t support first class functions, etc and you’ll be effective, which is to say, write it like you would write Go or Rust or similar.
> I still think OO provides a pretty easy mental framework for programming. You can get good results.
The problem is that OOP is a slate of something like 18 characteristics, and no two languages ever pick the same subset.
That having been said, the big problem with (especially early) OOP is that "Is"/"IsA" (aka structural inheritance) is the primary abstraction. Unfortunately, "Is"/"IsA" is a particularly lousy choice--practically anything ("Contains" or "Accesses" or ...) is better.
Most of the modern languages designed in the past 10-20 years reflect this--"Traits"/"Interfaces" seems to be what everybody has settled around.
I think OO can work because in many problems we only focus on one thing at a time. If multiple objects with equal complexity/importance are involved, OO can get sticky (e.g. which object should invoke a method, etc). I think Joe's article is intentionally provocative to make a point, but I'd like to see more discussions about when and why OO doesn't work well sometimes and what the course of actions we should take.
I've upvoted you because you spotlighted a very important issue. In OOP we are supposed to think of a program as little pseudo-isolated programs that somehow work together to fulfill the technical requirements.
This model works where it actually represents the real world: Mostly, in distributed systems.
In other areas, it just leads to overengineered piecemeal crap that is incredibly hard to understand. Where you can get control over what happens, you absolutely should take it. Don't act as if your program were a thousand little independent components that each have their own mind and life. Because it isn't like that, and if it were, there would be no way you could actually get them under control to produce a very specific outcome.
So the only reason why many OOP programs sort of work is that programmers never actually respect the abstractions they set up by defining so many classes and methods. To get the program to work, one needs to know very precisely what each class does in each case. In the end OOP is just a terrible farce, since there is no rhyme or reason for all these classes. It's needless bureaucracy, and it prevents us from structuring programs in a more efficient and maintainable way.
The basic situation is this. We often have a situation in which N operations contain M cases (for M different types).
Without OOP, we have the ugly organization of writing N functions that each dispatch M cases of code by pattern matching or switching on a numeric type field or whatever.
OOP lets us break these pieces of logic into separate methods. And then, in the physical organization of the program, we can group those methods by type. For any given type, we have N methods: each one implements one of those N functions just for that type.
This is a better organization because if we change some aspect of a type, all the changes are done in one place: the implementation file or section of file for that type. Or if a new type is added, we just add N methods in a new file or section; we don't have to change the code of numerous functions to introduce new cases into them.
Those who write articles opposing OOP never seem to constructively propose an attractive alternative for this situation.
It is this attractive program organization which swayed developers toward OOP, or even full blown OOP evangelism. It's not because it was hyped. OOP has concrete benefits that are readily demonstrable and applicable.
OOP is what allows your operating system to support different kinds of file systems, network adapters, network protocols, I/O devices and so on.
It's unimaginable that the read() system call in your kernel would contain a giant switch() on device type leading to device-specific code, which has to be maintained each time a new driver is added.
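As a minimal Java sketch of that organization (toy type and operation names): each type carries its own implementations of the N operations, so adding a new type means adding one class rather than editing every function.

    interface Shape {
        double area();      // operation 1
        String describe();  // operation 2
    }

    class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
        public String describe() { return "circle of radius " + r; }
    }

    class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
        public String describe() { return "square of side " + side; }
    }

    // Adding a Triangle touches no existing code: it is just one new class.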
With OOP I can add a new datatype easily, but when I want to add a new behavior I now need to go to M different places. With a functional style I only need to touch one. You're open on types but closed over behaviors. Functional styles are the opposite.
In some sense, I would even go as far as saying the idealized 'UNIX philosophy' is a degenerate example of this. We have a very limited set of types (the file) and a bunch of independently implemented behaviors. Imagine implementing sed or grep on a per-file (or per filesystem) basis.
These both get really interesting when you consider libraries/user extensibility, since unrelated actors could now add either new types or new methods. Most languages just punt on this by banning one or the other.
A pattern matching style would allow me to add a new sql() system call to query into the filesystem. Look at how much trouble there is adding new features to CSS, TCP, Java, etc trying to coordinate among so many different actors.
Or consider the case of a programming language AST. I can make a pretty printer, an interpreter, an optimizer, a type checker, a distributed program runner. But trying to do that with an OOP style is much harder for a large AST.
At the end of the day, we have NxM (type, behavior) pairs and there are pros/cons to each way of slicing them.
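And a sketch of the opposite slicing, in recent Java (sealed interfaces plus pattern-matching switch, so this assumes Java 21), using the AST example above: the set of node types is closed, so each new operation is just one more function, while each new node type means revisiting every switch.

    // Toy AST with a closed set of node types.
    sealed interface Expr permits Lit, Add {}
    record Lit(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    class Passes {
        // Adding an interpreter, pretty printer, optimizer, ... is one function each.
        static int eval(Expr e) {
            return switch (e) {
                case Lit l -> l.value();
                case Add a -> eval(a.left()) + eval(a.right());
            };
        }

        static String pretty(Expr e) {
            return switch (e) {
                case Lit l -> Integer.toString(l.value());
                case Add a -> "(" + pretty(a.left()) + " + " + pretty(a.right()) + ")";
            };
        }
    }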
> I still think OO provides a pretty easy mental framework for programming
Very true. It's a practical solution to a complex problem. However, when systems get complex, it becomes very hard to find the right object / type to bottle up logic. Perhaps, then, a mix of OO and functional is the solution.
I'm really sick of these 'why blah sucks' posts. Clearly OOP works for a lot of people. If it doesn't work for you, don't use it.
My personal feeling is that FP works better when the problem domain is more data oriented, requiring transformation of data streams whereas OOP is good when the problem domain is about simulating or modeling where you want to think about interacting agents of some kind.
The whole 'X is one true way' argument is narrow sighted. I feel the problem should always precede the solution
When I was a tutor (TA) at university (college) here in Australia, I marked assignments from my students. We used an automated test suite to check correctness. I went over each assignment to subjectively assess code style. I would open the first assignment which scored full marks with the test suite and find it was a clean 500-line implementation. Full marks. The next submission also got full marks, but it did it by spending only 200 lines. How? Was it overly terse? No... it looked clean and decent too. I would go back and look at the first submission and wonder - if you asked “could you throw away 60% of this codebase without sacrificing readability?” I would have answered of course not. But I would have been wrong. Silently, uncorrectably wrong, if not for my unique opportunity.
In the programming work you do regularly, do you think there is a way you could structure your code which would allow you to do 60% less work, without sacrificing readability? If there is, would you know? The answer was clearly yes in the web world when jquery was king. Or cgi-bin on Apache. Is it still true today? Can we do better still?
If there is, it would probably demand a critical look at the way we approach our problems, and how we structure logic and data. The value in articles like this is to point at that question. For what it’s worth, I agree with Joe Armstrong and others who have been heavily critical of OO. Software which goes all-in on OO’s ideas of inheritance and encapsulation seems to consistently spend 2-3x as many lines of code to get anything done, compared with other approaches. (Looking at you, enterprise java.)
You’re right - the problem should precede the solution. But our tools are the medium through which we think of that solution. They matter deeply.
I think this is mostly a reflection of the thing Java/C#/C++ have popularized as “OOP”: if one uses Common Lisp’s CLOS, much of the boilerplate associated with “design patterns” and OO architecture evaporates.
Yes absolutely. The article was written around 2000 when Java was the new sexy thing. When Joe talks about OOP being overhyped, he wasn’t talking about Rust’s traits or Common Lisp. He’s speaking about the hype around Java and C++, and the then-lauded three pillars of OO: Encapsulation, Inheritance and Polymorphism.
Not all OO works that way. In retrospect, inheritance was probably a mistake. And as far as I can tell, modern “OO-lite” coding styles focussing on composition over inheritance work pretty well. Alan Kay: “When I invented object oriented programming, C++ was not what I had in mind.”
I learned OOP in 1992, using Turbo Pascal 5.5, and got hold of Turbo Pascal 6.0 with Turbo Vision, shortly thereafter.
My follow up OOP languages until 2000 were C++ alongside OWL, VCL, MFC. Clipper 5.x, Eiffel, Sather, Modula-3, Oberon variants, Smalltalk, CLOS, SWI Prolog, Delphi and naturally Java.
In 1999 I got a signed copy from the ECOOP 1999 proceedings, full of alternative OOP approaches.
We should strive to actually provide proper bases to CS students, instead of market fads.
My inclination is to say that inheritance isn't the mistake, the mistake is making methods part of a class: my experience with inheritance in CL is that having generic functions/methods as their own first-class construct makes inheritance less of a minefield.
The post was "why OO sucks," not "why nobody should ever use OO." The distinction is important because everything sucks a little bit—especially in computer programming.
Understanding the objections to various programming paradigms can help improve how you use them, by having an awareness of what others consider potential minefields. (And who knows, maybe the arguments will change your mind. You shouldn't be so quick to prejudge the material.)
> I'm really sick of these 'why blah sucks' posts. Clearly OOP works for a lot of people. If it doesn't work for you, don't use it.
Most of us work in teams. If people I work with believe something that's not true, then it directly affects my work. If the majority of the profession believes something, it significantly affects my entire career. I don't actually hate OOP, although I have criticisms, but the attitude of "if you don't like it go somewhere else" is missing the point. Criticism isn't meant to be mean or nasty, it's meant to point out bad thinking for the benefit of us all. What bothers ME is the positivity police on Hacker News who think that "if you don't have anything nice to say don't say anything at all" applies to all of life.
I wonder if it would be constructive to modify the OP's statement a little bit. You can get there from here using any approach (as long as it is Turing complete ;-) ). Some approaches will work better than others, but optimising your approach first and convincing others second is putting the cart before the horse. Having a happy team that works well together is going to provide at least an order of magnitude more ROI than choosing the best approach. Compromising on your approach to make others happy will almost certainly pay off hugely. Get good at as many types of approaches as you can so that you can take advantage of those payoffs. The cult of "I must use the absolute best approach for this problem, everyone else be damned" is one that leads to misery IMHO (especially if it turns out that your "best approach" isn't, which happens most of the time in my experience ;-) ).
These days many languages are so expressive you can establish a dominant paradigm in the part of the code you work on. We have a kind of micro-level choice that even the most authoritarian code reviews largely can't stamp out (provided that the interface you expose is in harmony with the rest of your organization).
Please refrain from such self-centered, flippant dismissals. They tend to get in the way of potentially good discussions.
>>Clearly OOP works for a lot of people.
Something can work for a lot of people, and still suck.
>> If it doesn't work for you, don't use it.
Language and tool choices in our industry are made by a tiny minority. In fact, sometimes the people making those decisions are not even developers themselves! From that perspective alone, articles like this one are valuable.
Aside from that though, the question isn't whether OOP "works" or not. Rather, it is when it works, and for how long, until you run into a myriad of problems, such as leaky abstractions or inheritance hell. These are worth discussing. If you disagree, you can move on. No need to voice your misgivings.
Please refrain from telling others to refrain. The negatives are worth discussing, but not the positives? I want to hear everyone's thoughts not only yours.
I agree with a lot of the arguments made in this post. However, I think there is an assumption that programming is programming and there aren’t different problem sets to solve. To me, UI development lends itself naturally to the everything-is-an-object paradigm. I need a button to start event x. Hmm, the button needs to know what size it is and other miscellaneous attributes about itself. Oh, it also should have a function that handles what to do when the button is pressed. Hey, that sounds a lot like an object that has state, a data structure, and functions all in one. Now as we move to basic CRUD-type applications, OOP can be overkill. If all you want to do is pull x number of rows out of a database and put them in a dataset, do we really need objects for this? Nope. Most of the time the objects for these types of problems are just wrappers around an array of pointers. Creating DAO-type objects is often more trouble than it’s worth, but if we’re already in an OO paradigm, why switch?
Binding data and functions together beats operating on global data visible to everything. One of the big wins of OOP is less exposed global data.
A big problem with OOP in C++ was that it wasn't opaque enough. The headers needed to use an object contain info only useful to the object's private functions. This creates recompile hell.
Multiple inheritance is seldom worth the headaches.
Overriding a member function needs declaration support. The usual cases are 1) the parent function needs to be called first, 2) the parent function needs to be called last, and 3) the parent function needs to not be called at all. Which case is to be used is properly a property of the parent, but in most OOP languages, it's expressed as code in the child.
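As a minimal C++ sketch of one way to make that ordering a property of the parent (the Widget/Button names are hypothetical): the parent's public method fixes when its own work runs, and the child only supplies a hook, so it cannot forget or misplace the parent call.
#include <cstdio>

class Widget {
public:
    virtual ~Widget() = default;
    void draw() {            // the parent decides the ordering once, here
        drawBackground();    // parent's own work runs first...
        drawContent();       // ...then the child's hook
    }
protected:
    virtual void drawContent() {}
private:
    void drawBackground() { std::printf("background\n"); }
};

class Button : public Widget {
protected:
    void drawContent() override { std::printf("button label\n"); }
};

int main() {
    Button b;
    b.draw();  // prints "background" then "button label"
    return 0;
}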
The use cases for generics are rather limited. Collections, yes. Beyond that, it's usually someone getting cute. That way leads to the "metaprogramming" mess. Go has a few built in parameterized types - maps, arrays, and channels. Go2 may have generics, but the designers are struggling with what to add that doesn't take them down the rabbit hole.
Objects are probably more useful than some of the things invented to replace objects, like "traits".
That's a good idea! Although... I wonder if you would get tired of passing around the same piece of data between such functions all the time? Here's a crazy idea: what if we pass the data implicitly to such functions? Like, we could pretend that the data was passed to the function as a hidden argument, and we could call this argument "self" or "this"! :O
Or! You could just pass in a data structure without methods! The problem with global variables is implicit dependencies, and with “this” or “self” you now have slightly scoped implicit dependencies. I say slightly because you still have to worry about base classes. It’s the same problem just more contained.
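A minimal C++ sketch of the equivalence being poked at here, with a hypothetical Counter: the member function receives the data through the hidden `this` parameter, and the free function takes the same data explicitly.
#include <cstdio>

struct Counter {
    int count = 0;
    void increment() { ++count; }          // data arrives implicitly via `this`
};

void increment(Counter& c) { ++c.count; }  // same operation, data passed explicitly

int main() {
    Counter a;
    a.increment();   // roughly sugar for increment(a)
    increment(a);
    std::printf("%d\n", a.count);  // prints 2
    return 0;
}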
> Binding data and functions together beats operating on global data visible to everything. One of the big wins of OOP is less exposed global data.
“Passing state as parameters” is what solved the “everything operating on global state” problem. Binding functions and state permitted polymorphism/abstraction.
> Multiple inheritance is seldom worth the headaches.
Single inheritance is never worth the headaches. :)
> Objects are probably more useful than some of the things invented to replace objects, like "traits".
Traits don’t replace objects; they are interfaces with static dispatch semantics.
There are so many kinds of polymorphism. Many of them do not require objects. Executing a closure is a form of polymorphism, as is manipulating a generic type.
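A minimal C++ sketch of two of those object-free forms, with made-up names: a closure passed as a value, and a generic function that works for any type supporting the operations it uses.
#include <cstdio>
#include <functional>
#include <vector>

// Polymorphism via a closure: the caller neither knows nor cares what is behind f.
int applyTwice(const std::function<int(int)>& f, int x) { return f(f(x)); }

// Polymorphism via a generic (template) function.
template <typename T>
T sum(const std::vector<T>& xs) {
    T total{};
    for (const T& x : xs) total = total + x;
    return total;
}

int main() {
    int step = 3;
    std::printf("%d\n", applyTwice([step](int x) { return x + step; }, 1));  // 7
    std::printf("%d\n", sum(std::vector<int>{1, 2, 3}));                     // 6
    return 0;
}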
> The “hide the state from the programmer” option chosen by OOPLs is the worst possible choice
When you code C and write something as simple as
fwrite( "Hello world", 1, 11, stream );
that line of code changes state of the file stream object in CRT, state of file caches in OS, state of B-tree nodes in file system driver, state of disk firmware, state of NAND flash chips…
It’s not just OS and drivers. Any sufficiently complex software is built by layering abstractions on top of each other.
Many problems that need to be solved have very complex state, often related to IO or GUI. You don’t have other choice but to use some form of OOP and hide lower-level implementation details behind abstractions, otherwise you won’t be able to get anything done due to overwhelming amount of state to reason about.
> program hiding state in layers of objects is often counterproductive
Do you think it was counterproductive to hide OS-specific file handles, CRT level of caching, and many other things CRT does behind these opaque FILE* object pointers?
Such hiding is what allows <stdio.h> to be very similar, and quite easy to use, across all platforms.
The implementation of these fopen and fwrite functions relies on kernel calls like open/write on Linux and CreateFile/WriteFile on Windows. Do you think it was counterproductive to hide all the lower-level stuff like OS caches, inodes (linux) / NTFS (windows), and the rest of them, behind these OS kernel calls?
If you decide to unhide them, your OS won’t have any security at all: the OS file system cache is security sensitive. Modern OSes implement strict access control while keeping that piece of state hidden; every file has permissions, every process accessing them has a user identity, and the OS checks these things whenever a process accesses files or the file system cache.
> Do you think it was counterproductive to hide OS-specific file handles, CRT level of caching, and many other things CRT does behind these opaque FILE* object pointers?
To answer that question for parent, no, he obviously doesn't think so. And yes, streams are basically OOP (they are implemented using "method dispatch"). Streams are one of the few successful abstractions out there.
And when I say "successful" I mostly mean "necessary". Because they're leaky. Consider fseek(3), ftell(3), fileno(3), fflush(3), fclose(3), fsync(2), posix_fadvise(2), isatty(3), fcntl(2), ioctl(2) and many more. All operations that are not supported for all streams, but necessary for some (like file system files, network connections, terminal devices...). Essentially, when you use any of these you concede to breaking the abstraction.
It's incredibly hard to handle these streams correctly, and the vast majority of programs are faulty in that they don't handle most error conditions correctly. In some cases, it isn't even possible to handle errors through the API! See close(2).
So again, not saying we shouldn't have a stream abstraction. But abstractions come at a huge price, and therefore it's incredibly misguided to make every program a layering of thousands of pseudo-isolated objects that bring their own bug-prone abstractions and idiosyncrasies.
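As a small illustration of that leak (a sketch assuming POSIX behaviour): every FILE* exposes fseek(), but the call fails on streams that aren't seekable, such as pipes, so the caller ends up caring what kind of stream it really has.
#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    // Run with stdin redirected from a regular file and the seek succeeds;
    // run with stdin coming from a pipe (e.g. `echo hi | ./a.out`) and it fails.
    if (std::fseek(stdin, 0L, SEEK_SET) != 0) {
        std::printf("not seekable: %s\n", std::strerror(errno));
    } else {
        std::printf("seekable\n");
    }
    return 0;
}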
> Streams are one of the few successful abstractions out there.
I don’t think they’re few. I think all modern software uses OOP-based abstractions heavily.
For example, for GUIs, it’s visual trees everywhere. The HTML DOM is the most widely used one now, but pretty much every GUI framework has something conceptually similar; even the 30-year-old WinAPI is built on very OOP-like concepts: you have HWND handles, but the state is completely hidden and can only be accessed by calling functions.
> Because they're leaky.
No abstraction is 100% waterproof, but some are useful enough for many practical applications, even though they leak occasionally.
These particular file I/O APIs are less leaky on Windows, but they still leak, and there are many weird low-level APIs to bypass the abstractions: DeviceIoControl, the defragmentation API, the backup API, and many others.
> abstractions come at a huge price
I’m not convinced the price is > 0.
We can always skip any abstraction we don’t like and use the next lower-level one. We don’t often do that, because developing code based on these lower-level things is often more expensive than using the abstractions, even though they leak.
That’s why people use Electron & JavaScript, Java & .NET, and game engines. They are all made from a large number of the layers you’re talking about, and when they leak it can indeed be very expensive to diagnose and fix or work around. However, dropping them and developing something from lower levels is usually way more expensive.
The last Electron app I used (one of these new chat apps, I forget the name) was dog slow on my $700 computer from 2018, and it made my computer hang (probably a graphics driver issue, or something related to excessive memory consumption).
I don’t like Electron very much, and I don’t write Electron apps (I mostly code C++ and C#), but I have seen good products built on Electron. For example, Visual Studio Code is fine.
It’s the same story with all high-level tools: they’re easier to use and have a lower barrier to entry, so they are also used by less skilled programmers who fail to deliver good products.
For example, Unity3D has a very low barrier to entry and there are many low-quality games built with it. But if developers know what they’re doing, it is possible to build great games, e.g. Cities: Skylines.
Another example of why hiding state can be a good thing. Consider the following code:
void* ptr = malloc( 32 );
No OOP is involved here, yet the malloc function operates on a critically important piece of mutable global state. The state is quite complex due to numerous reasons: performance, fragmentation, multithreading & deadlocks, alignment… Just look at ptmalloc (linux) or jemalloc (BSD). I think it’s undocumented on Windows but I’m sure it’s at least as complex, search for “low fragmentation heap” which is the default one since Vista.
I don’t think hiding that state in layers of abstractions is counterproductive; I think it was good, for the following reasons.
1. In 99% of cases it makes it easier to write code. Heap state is very complex in practice, and just like file I/O, it spans the user-mode state of your process and OS kernel state. Hiding the state altogether allows programmers to reason in terms of a very simple API: just two functions, malloc/free, each with a very clear purpose.
2. Hiding that state makes bugs in your code less likely to corrupt it. If you instead have something like void* malloc( size_t bytesCount, struct HeapState *processHeap ) it becomes easier to accidentally corrupt that state.
3. Hiding that state decouples consuming code from the implementation. On Linux, that state is hidden deep inside the libc library. You can replace the complete implementation of that state, or even eliminate it if you want to; the consuming code will run fine as long as the malloc/free API stays stable. Quite often, code that uses explicit state becomes dependent on the particular implementation of that state. That is much less likely to happen when the state is hidden behind some API, OOP or not.
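To make the contrast concrete, here is a minimal C++ sketch of a hypothetical arena allocator whose bookkeeping is passed around explicitly, next to the standard malloc whose bookkeeping is hidden inside the runtime; alignment and error handling are ignored for brevity.
#include <cstddef>
#include <cstdlib>

struct Arena {
    char* base;
    std::size_t capacity;
    std::size_t used;
};

void* arenaAlloc(Arena& a, std::size_t bytes) {
    if (a.used + bytes > a.capacity) return nullptr;
    void* p = a.base + a.used;
    a.used += bytes;   // callers can also poke a.used directly, for better or worse
    return p;
}

int main() {
    void* hidden = std::malloc(32);            // state lives somewhere inside the allocator

    char backing[1024];
    Arena arena{backing, sizeof backing, 0};   // state lives right here, in the open
    void* fromArena = arenaAlloc(arena, 32);

    std::free(hidden);
    return (hidden && fromArena) ? 0 : 1;
}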
You absolutely should wrap malloc in any moderately sized project! But clearly that's not OOP. We need to differentiate between abstract data types (method dispatch, which I would normally count as OOP) and just making functions to "reuse" code. In the latter case, there is only one possible implementation, and it can expose as many or as few implementation details as is appropriate -- and in general, there won't be any abstraction leaks.
> and in general, there won't be any abstraction leaks.
For malloc, address space fragmentation is a common issue for 32-bit processes. Multithreading issues are common, when 2 threads access same cache line, it’s even slower than accessing main RAM. Randomized nature of the allocator causes performance problems.
Every time people bother writing custom allocators, they do that because malloc/free abstraction was not good enough for them, i.e. has leaked.
If I make a function called allocate_new_foo() that is not an abstraction (in the sense I mentioned - i.e. method dispatching). It's plain old boring programming - specifying in only one place how foo's should be allocated. And each caller is expected to have a good understanding what kind of allocation to expect - i.e. pool allocation or whatever.
I wouldn't consider this an abstraction at all, and it doesn't leak nearly as much as a "streams" abstraction - if at all.
I think this may be another one of those swearing-in-church type opinions, but I think C++ is excellent as a compromise here. It's got its faults, but C++ allows use of objects when necessary or helpful without the ridiculously pedantic OO of Java. It is quite nice to be able to use it when convenient while not having to do so when stupid.
I can't speak to python as much, as I haven't really used it, but I've heard a lot of good things about it (recently read a comment saying it was simply good at nothing but good enough at everything).
There are other reasons I like it: huge library support, ability to do low-level stuff for performance but use higher-level features for an easier time, etc.
Mostly, I've yet to see something better that has the maturity, libraries, depth of talent (!), etc. that doesn't have problems of its own. It's certainly not perfect, but there are reasons it has remained so popular in many fields.
Maybe because nobody had commented as such until now? I am rather practical about this: I don't really like Java because it forces things that make more sense as functions to be under an object. What if I just want a simple global utility function, and don't want to make every single class inherit from a class containing only that? It's a lot of unnecessary hassle for something like that. I'm not saying OO is bad, I said it was too pedantic. I like languages which allow you to do what makes sense.
> You could also make a class of global functions and use that
I'm not sure I understand your issue with doing this. You need to put your global (i.e. public static) functions in classes not because Java is forcing OO practices into everything, but because classes effectively serve as Java's translation units. I think they serve this purpose pretty well in practice.
Classes being Java's only translation units is one of the downsides I'm mentioning. Really my objection is more that it doesn't allow use of the right tool for the job (which is not always an object).
I'm not sure I see where objects enter into this. I mean, you seem to be asking for something like this,
public unit Util {
    public void foo() {
        // do things
    }
}
which would compile down into a bytecode-containing artifact which we could call a unit file, which the JVM would load at runtime using some sort of unit loader. Callers could import the Util unit and then invoke foo with the statement
Util.foo();
But then, I don't see the difference between the above and the following,
public class Util {
    public static void foo() {
        // do things
    }

    // If we're really pedantic, we can ban construction of Util instances
    private Util() {}
}
which compiles down into a bytecode-containing artifact called a class file, which the JVM loads at runtime using a class loader. Callers can import the Util class and then invoke foo with the statement
Util.foo();
What's this downside you speak of? Is there something a different kind of translation unit would do that a class currently doesn't?
There's no 'object' involved here. First you said this requires subclassing, which it doesn't. Now you're handwaving about a feature that's specifically there to allow for things like plain global functions - a static method has no instance, there's no dynamic dispatch and it can't be overridden. People often point out the facility's utter un-OO-ness. The OO equivalent would be a class method which Java doesn't support at all.
I have seen a lot of posts bashing OOP by FP advocates. They write so many points explaining why OOP sucks, however, strangely few of them try to present why and how FP can be better in those points. Most of them assume that FP is unconditionally better in all aspects of coding, so they don't even try to explain.
They also falsely assume that bad OOP practices in the industry == OOP as a whole, full stop. Arguments against OOP tend to be straw-man arguments.
I mean, sure, Java taught bad OOP to millions of developers in the 90s, early 00s, and even today. Sure, the legacy of bad OOP lives on today.
That doesn't say anything about the principles of OOP when applied properly, when done in languages that avoid the pitfalls of bad OOP, and when the developer avoids the very patterns that tend to fall into criticism.
(incoming FP advocate to call my statement a strawman argument in 3, 2, 1…)
That's not entirely accurate. Abstract data types (e.g. ML modules) can also be modeled as existential types and don't have the same problems.
I think it's the particular combination of only having nominal typing, only allowing the oo-flavor of data abstraction, and encouraging programming with state and using inheritance for code reuse that leads to the object-spaghetti nonsense in heavily-oo Java/C++ codebases.
I just mean that (at least in Haskell) existential types provide information hiding & combine functions with data. In fact, they're implemented with dynamic dispatch, just as with OOP.
The problem is not that OO languages have these features; the problem is that they lack other features common to functional languages.
I've felt that way for years. OO perhaps makes sense for certain things, mainly things that someone else writes and you use as a library. Using OO more widely leads to very inflexible code, and since specs are not inflexible, the result is disaster.
> In an OOPL I have to choose some base object in which I will define the ubiquitous data structure. All other objects that want to use this data structure must inherit this object.
So... that's not actually how it works though, right?
It was. Criticisms like this led to implementing hybrid-OO systems. Those are what we now call OO. He’s talking about Smalltalk, Self, that sort of thing.
Objects should be used when you need to maintain invariants between members, or need an opaque stateful "thing" like a resource handle. They shouldn't be used to structure a program, but act like any other type in a functional or procedural setting.
An object should also have completely private members and no concept of inheritance. This greatly simplifies thinking about and designing objects. There should be no distinction between a regular method and a special method like a constructor or destructor. Objects should essentially be simple constructs with no magic that force interaction with a particular kind of data to go through functions that maintain the invariants. That's it. That aligns with the message-passing origin of OO and Erlang, in that there is no "breaking the rules" to get at an object's state.
This is basically how they work in Rust and Go. The key in both of these languages is the use of traits and interfaces to obtain special behavior and polymorphism. Neither of them forces a composite type to be completely public or private though, and Go limits visibility to the package level.
I used to really hate OO, but I started to think about which parts of it I really disliked, and saw that in Erlang, Haskell, etc. there is always a need for an "object", whether it be a GenServer or a Monad. Java and C++ just took the concept too far, and took an absolutist approach in which most features need to be part of the OO machinery, with Java being by far the worst. Objects have their place, but they should be just as common as any other type, and should not be used to think about program structure in most cases.
I don't mind object oriented programming too much: Occasionally, at most a small fraction of the time, it helps me write good code, and then I use it. Otherwise I don't.
Here are some of the things I'd like to see in programming languages:
(1) Some semantics that admit some useful static analysis, that is, tell me some useful things about my code, e.g., for a variable, where does it get used and where might it get changed?
(2) Offer me some useful code transformation properties. For this, a start is the relatively powerful scope-of-names constructs in PL/I: I can just drop another function into the source code and know that, in the more common and important respects, that function being there won't hurt anything else already there. So, if I have some function I like for something or other, then I can just drop it in.
There's a lot more. The main point is just to try, so that given 100,000 lines of code, some tools can tell me where the lines of spaghetti and the meatballs are!
So, that code is a system and treat it as such, i.e., instrument the thing and tell me what is going on, both statically before it runs and as it runs.
I have worked on systems that had turned into the classic 'big ball of OOP spaghetti'.
Likewise I've also had to deal with a 'big ball of structured programming spaghetti'.
My take on the difference in these two types of systems is they are generally the result of too much coding with too little design.
Personally, I think the OOP approach does have a tendency to turn into that spaghetti ball more easily, if only because it needs more up-front design.
Most of places I've worked who adopted an OO approach have tended to focus predominantly on the OOP with little or no attention to the OOD and that tends to be a recipe for disaster.
The biggest problem of OO languages for me is - it always depends.
Composition or inheritance or template or concrete? It depends.
Extract methods? It depends.
Design patterns? It depends.
Although mastering good OO design - knowing what to do and when - is surely an art, it doesn't handle dynamic business models very well.
If I just express the business, I cannot handle requirement changes. If I want to deal with some future change, I have to do a lot of non-business-oriented design, making the code less expressive.
And most OO languages don't have algebraic data types, which makes expressing the business model more awkward.
Object oriented programming isn't about an individual programmer.
It's about creating discrete APIs so that a bunch of different programmers can work on different aspects or sections of code.
It's an organizing principle for discrete elements that encapsulates internal state. You don't need to know your datastore is SQL or Redis, or Redis caching SQL data, or marshalled JSON, you just call Users.getUser(id), and get a User Object with a defined API.
OO is in many ways just microservices for a single codebase.
I think people ignore the importance of OO in terms of unit and integration testing.
Discrete APIs and encapsulation don’t require OO at all. For instance, in C you can easily encapsulate things by just not exposing them in the header file. Even data structures can be encapsulated by returning a handle/pointer to a structure and not exposing the structure itself. The only thing OO gets you is embedding functions in the data structure.
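A minimal single-file C++ sketch of that C-style opaque-handle technique, with made-up names; in a real project the first five declarations would live in the header and everything below the divider in the implementation file, so callers never see the struct's layout.
#include <vector>

struct Stack;                       // opaque: declared for callers, never defined for them
Stack* stackCreate();
void   stackPush(Stack* s, int v);
int    stackPop(Stack* s);
void   stackDestroy(Stack* s);

// ---- implementation file: the only place that knows what a Stack contains ----
struct Stack { std::vector<int> items; };

Stack* stackCreate()              { return new Stack{}; }
void   stackPush(Stack* s, int v) { s->items.push_back(v); }
int    stackPop(Stack* s)         { int v = s->items.back(); s->items.pop_back(); return v; }
void   stackDestroy(Stack* s)     { delete s; }

int main() {
    Stack* s = stackCreate();
    stackPush(s, 7);
    int v = stackPop(s);
    stackDestroy(s);
    return v == 7 ? 0 : 1;
}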
Not sure where people are getting the idea that I'm saying that OO is the only way to do abstraction and encapsulation.
I'm saying it is a way, and a sometimes useful one. Certainly when I'm thinking about a SQL schema, I'm often thinking about it using OO design principles, even though SQL is functional.
If you don't like data and functions bound together, then use a CLOS-like object system with generic functions that don't belong to classes, but have methods whose arguments are merely specialized by class types.
Functions and data are in fact tied together. Almost any thing in mathematics is described in terms of some collection of properties which are elements of sets, and operations. The operations are part of the representation.
A Universal Turing machine is the tape, the symbols, the rules of how the head moves and reads and writes symbols, all together.
My lukewarm enthusiasm for OO stems from a different set of reasons than these. I view its main benefit to be the way it reinforces encapsulation as a design pattern. But good code with clearly organized functions and data structures can achieve that just as well.
I don't have anything against it, and wielded with skill I see its benefit. But after 30 years of programming I think you can achieve many of those benefits more simply without it.
I think this article is quite old, so I can forgive the author for being out of touch with modern OO language practices. Nonetheless, I find myself disagreeing with almost all of his arguments.
When he talks about state for example, I assume he means mutable state. Everyone knows this is best avoided if possible. The vast majority of OO languages provide mechanisms to avoid mutability e.g. data classes or keywords to make references constant.
He also rails against private state specifically. I assume again that he means private mutable state. This is generally a bad idea and is accepted pretty uncontroversially as a bad idea. The principle of data hiding in general however is definitely not a bad idea. Being able to enforce scoping rules on classes/functions/members is not meant to facilitate creating 'black box' entities whose internal mutations are difficult to reason about, but is instead meant to provide guarantees about dependencies between different parts of a system, i.e. Separation of Concerns.
Regarding 'everything is an object', in many cases this is perfectly acceptable as it guarantees some sort of basic interface that all entities conform to e.g. being able to ask an entity to provide a hash code, or to carry out a comparison check with some other entity. Some modern languages like Kotlin even eschew the concept of primitives altogether and just make everything an object.
Finally, regarding his point about OO languages having data type definitions all over the place, I must be misunderstanding something here... Why on Earth would you want all your data type definitions in one place?
Anyway, just wanted to give my $0.02 lest anyone get the impression that these are 'knockout punches' against any and all OO languages.
> Everyone knows this is best avoided if possible. The vast majority of OO languages provide mechanisms to avoid mutability e.g. data classes or keywords to make references constant.
Well, clearly not everyone knows it.
My biggest complaint about languages that incorporate functional features is that immutability is most useful when it’s the 99% use case.
When it’s a special tag that may or may not be used within the library/code base you’re using, you don’t get the cognitive advantage of feeling confident you can judge the output of a function by reasoning about its input.
My brain is small. I need all the crutches I can get, and referential transparency is a huge advantage.
Yes, I agree, I don't understand why modern languages don't make references immutable by default. The best I've seen is symmetry between val and var. Talking about safe defaults, I remember being very disappointed when Jetbrains changed the default scope in Kotlin to be public....
> provide mechanisms to avoid mutability e.g. data classes or keywords to make references constant.
So, functional programming?
> guarantees some sort of basic interface that all entities conform to e.g. being able to ask an entity to provide a hash code, or to carry out a comparison check with some other entity.
Isn't that terrible? You want to provide a generic interface to calculate a hash or perform comparison: Now the thing has to be an object! If it's a primitive type, you better wrap it in an object now. If it's an existing class, you have to extend it. It becomes an object of a different class.
Or, instead, you could just have type classes to let the same apply to anything whatsoever, as long as it's a type, without the need for it to be an object. Heck, in some cases, the type could be a function instead.
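C++ has no direct equivalent of Haskell type classes, but as a rough sketch of the non-intrusive idea, a std::hash specialization makes an ordinary struct (here a hypothetical Point) usable in hash-based containers without wrapping it in an object or extending any class.
#include <cstddef>
#include <functional>
#include <unordered_set>

struct Point {
    int x, y;
    bool operator==(const Point& o) const { return x == o.x && y == o.y; }
};

// Hashability is bolted on from the outside; Point itself is untouched.
namespace std {
template <>
struct hash<Point> {
    size_t operator()(const Point& p) const noexcept {
        return hash<int>{}(p.x) ^ (hash<int>{}(p.y) << 1);
    }
};
}

int main() {
    std::unordered_set<Point> seen;
    seen.insert(Point{1, 2});
    return seen.count(Point{1, 2}) == 1 ? 0 : 1;
}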
> He also rails against private state specifically. I assume again that he means private mutable state. This is generally a bad idea and is accepted pretty uncontroversially as a bad idea.
Here's a program that has some mutable state. Do you want that mutable state to be private or public? Do you want anybody to be able to change it, or do you want all changes to have to go through public methods?
Given that there are programs that necessarily have mutable state, why in the world would you not want that state to be private?
Sorry, I realise on reading that back that my response makes it sound like I'm an opponent of making mutable state private. I totally agree with you; what I was trying to say was that mutable state in general is considered something to be avoided if possible. However, if it is necessary, it should be hidden, i.e. made private.
> basic interface that all entities conform to e.g. being able to ask an entity to provide a hash code, or to carry out a comparison check with some other entity
What do you do with values that can't be meaningfully hashed/compared? Java is the only place I've come across this odd idea that everything is hashable, and I've never thought of it as a positive thing.
Well, the problem with a value not having a corresponding hash is that it can't be used in any code that relies upon it having a hash. E.g. in Java, there are many hash based data structures that work for any object as they all have hashes. Even if there is no meaningful hash, by default the memory address can be used. This may mean you don't gain the performance benefits of a hash based data structure, but at least it is compatible with the type system.
Data structures should be bound to functions; it increases modularity. That does not mean common code should not be refactored and made standalone.
Data types are instantiated to objects. When in Erlang a time variable is declared, that variable is an object. OOP has data types too, called classes, and having everything to be an object does not mean it has to be the same data type.
In some languages classes can be in the same file, so point #3 is actually about specific implementations.
Inside point #3 he also mentions inheritance. Not all OOP languages require inheritance from a specific class.
I also disagree with his 4th point about hidden state: hidden state enhances modularization tremendously. I don't need to see how something is implemented, treating it as a black box increases my chances of using it correctly through a predefined interface. If the interface is lacking or is not defined correctly, that's an entirely different problem and it's not due to hidden state.
Objection 1:
Given a queue, priority queue, and stack, how do those data structures "just exist", without the behavior associated with them? The interaction with them is their defining characteristic.
Objection 2: I agree not everything should be an object. However where do you draw the line? To me a timestamp is a perfectly reasonable object. I can see the pros and cons of "3" being an object as well.
Objection 3: I like things to be organized. What benefit do I get by jumbling everything together in one spot?
Objection 4: Yes they do, pretending they don't doesn't help. Even in Haskell, files and sockets are represented as an opaque handle to hidden state. It very much looks like a OO interface in a non OO language. Given the three data structures in objection 1, hiding the state allows for a smaller surface area to prove that the data structures are correctly implemented. Allowing random access to their state does not improve them.
> I think this is a fundamental error since functions and data structures belong in totally different worlds.
This is my biggest issue with the article. Blurring the boundary between program and data is one of the core principles at the heart of computer science, and was crucial to the works of Turing, Church, et al, that birthed computer science as a field.
That's an interesting observation, however this discussion is at an abstraction level far removed from those basic concepts. No one is arguing whether OOP can be compiled into logic instructions that are better or worse than some other programming paradigm, we've got that figured out. We know how to convert just about any system of organizing data and functions into the billions of individual transactions that a CPU has to make to execute that program. Copying a byte value from one place into a register, interpreting another byte as an instruction and the next byte as the address for the result and the next as a hardware switch... These are the concepts Turing and Von Neumann figured out, but they're irrelevant to the modern day question of how to organize thousands or millions of lines of code to minimize errors and maximize understanding and efficiency.
In fact, the OOP question isn't as much about science and technology as it is a social question concerning the best way to organize logic and information as a profession. What's the best way to make sure programmers don't screw things up? Is it to make sure the most incompetent developer can be somewhat productive with simple concepts, or are those simplified concepts inherently flawed? Or is it somewhere in the middle?
The "problem" with OOP is that it was originally oversold for the wrong reasons, and it took the industry a while to learn when and where to use it and where not to.
Around the late 1980's OOP was hyped as a way to do domain modelling. That is, modelling the nouns and verbs of the subject at hand (employees, invoices, etc.) While some still defend it for domain modeling, most have found it a poor fit for any non-trivial model.
But it did turn out a handy way to present and manage APIs to "external" libraries or services. This is largely how it's used today.
The lessons:
1. Use the right tool for the job, and no one tool is right for everything.
2. Test an idea in production for a while before drawing conclusions about what it's good at and what it's not.
Microservices is currently making similar mistakes, I would note. Those who don't learn history are bound to repeat it.
That's a bit weird to see a seasoned programmer harboring those views. I remember that when learning OOP I was equally confused. "I can do all that with functions and structures already." And indeed you can. There is nothing you can't technically do without OOP.
OOP is not a programming feature, it is a software engineering feature. It allows for cleaner APIs and higher levels of abstractions. IMO the core feature of OOP is less the ability to bind functions with data but the ability to overload operators.
Sure, you can have functions that have the "this" pointer as the first argument and have exactly the same things. You can add a bunch of flags and have the core features of inheritance.
You can. Now what do you prefer to write, when finding the middle of a 3d segment?
mid = (a + b)/2
or
mid = scalar_division(vector_add(a,b),2)
?
If you allow operators to be overloaded, is there any good reason not to place them close to your structure definition? Isn't it good practice to force these functions to be defined if your structure is?
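A minimal C++ sketch of exactly that, with the operators defined right next to a hypothetical Vec3 struct so the midpoint reads the way the math does:
#include <cstdio>

struct Vec3 {
    double x, y, z;
};

Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator/(const Vec3& a, double k)      { return {a.x / k, a.y / k, a.z / k}; }

int main() {
    Vec3 a{0, 0, 0}, b{2, 4, 6};
    Vec3 mid = (a + b) / 2;   // vs. scalar_division(vector_add(a, b), 2)
    std::printf("%g %g %g\n", mid.x, mid.y, mid.z);  // prints 1 2 3
    return 0;
}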
The core idea of OOP is that it extends the language by allowing you to build more abstract concepts on top of lower-level abstractions.
That's a core misunderstanding that I keep seeing with low-level programmers. They want to see the implementation details of everything and refuse to obscure some behaviors and trust the libs/compiler makers.
People who spend more time on higher-level algorithms can't be bothered with all the implementation details. When I do DL, I want the ability to concatenate layers of different types (that inherit from the same generic type), to check their output(), to manipulate tensors of floats, and to assign them a scalar value or multiply them by a scalar.
You can go a very long way without using any kind of OOP and staying at the implementation level. I am actually in awe of how much one can do that way. But OOP is a tool for teams to work together without knowing the details of each other's parts and build increasingly complex abstractions.
The problem with placing operator overloads within the definition of one object is — which object do you look to when trying to add two different types of objects and the assumption is TypeA.+(TypeB) is the same as TypeB.+(TypeA)? You put them in an inheritance perhaps, but that might unnecessarily complicate your type classification. Okay, so you use some sort of interface, perhaps with a default implementation. Well, interfaces aren’t unique to OOP, but they do add a bit of confusion — when looking for an implementation you might now have 3 places to look (Possibly 4 mentions if default implementations aren’t allowed).
Really there’s an easy conclusion to this — allowing and preferring user defined operators or operator overloads is a hotly debated topic and one where there appears to be no right answer — about the only conclusion you can draw then, is that operator overload is not exclusive to OOP: https://softwareengineering.stackexchange.com/questions/1809...
I’d also suggest that with functional programs there are limits to how much you should cram into one program, and that a smart way to modularize your code would be, for example, to follow the Redux reducer pattern and build or compose larger state objects and operations from functions that operate on just parts of the state object. This way you isolate writes in a similar way to private OOP variables, where it’s just not expected (or even “in scope”) to modify other parts of the global state. Additionally, you can, as in OOP, control access to state by encouraging the use of “selector functions” rather than direct access to state. You could also make your functions take smaller typed structs of state; really, functional programming is at least as expressive as OOP. I’ll say that both allow you to make mistakes like mixing concerns, using lots of globals, performing magic with metaprogramming or overloading, or not being expressive enough to create your own DSL in. OOP or functional, these concerns are in my experience shared by basically any language.
Also, outside of numerical work and some set stuff, there aren't really that many times you actually need to overload operators. And you can easily extend languages without objects (Lisp).
It's always interesting to see "<insert popular paradigm/technology> sucks" headlines. Maybe that's also intrinsic to the popularity: on the one hand, it most likely got popular for a good reason; on the other, many people may be very careless when using the paradigm and inform themselves only superficially - at best - about the background. That said, I really like how JS has been tamed through OO, despite the FP approach being much more interesting.
When I use OO, my main motivation is also that it's easier to share code and debugging becomes much more predictable. (Aahh, it's in this encapsulated part of the code...)
Approaching OO with the mentality that it's easy is what makes OO look bad. It doesn't get treated with the same focus/attention as other methodologies. This realization often comes in retrospect, after programming for many years and seeing how you misunderstood OO and how your abstractions improved over time.
OO is not easy after all. You might be familiar with all the patterns, which can give you a sense that you know OO well, and still end up with a pretty bad design.
The thing is, ‘filename’ is a bad example of an object.
Of course, it should be a string. However, a ‘File’ object is a better example that favors OOP. When you call ‘.read’ on a file object, you expect its contents. It doesn’t matter if it’s a Windows file, a Unix file, an IO string, or a web resource. When you call ‘.read’, it just works. Whereas in the functional world, you’d have four different ‘.read’ function calls with an if statement.
One of the many differences is that there is no inheritance, and implementations of traits can be added for types long after they are defined. Traits in Rust are not like Java interfaces.
It wasn't my example, but it's just disproving the notion that FP somehow means you need to write big if statements to dispatch your functions.
Traits are not functional-language specific except in that they are often used in FP to solve problems similar to those one might use inheritance to solve in OOP.
It's unfortunate you're being downvoted for your opinion instead of having other ideas presented to you so I'll do my best though I'm no expert.
> Whereas in the functional world, you’ll have to 4 different ‘.read’ function calls with an if statement.
You could do this but that's not idiomatic.
What you would preferably do is have a function that takes a read function as a parameter. This is essentially the same thing as what OO languages implicitly do.
For an example of generic functions you can take a look at map or fold/reduce. You'll see they don't need to use if statements in the way you described.
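A minimal C++ sketch of that suggestion, with made-up names: the caller supplies whatever read function fits, and the code that consumes the contents never branches on what kind of source it is.
#include <cstdio>
#include <functional>
#include <string>

// Works the same whether the supplied reader talks to a disk file, a socket,
// a web resource, or (as below) an in-memory string.
std::string firstLine(const std::function<std::string()>& read) {
    std::string contents = read();
    return contents.substr(0, contents.find('\n'));
}

int main() {
    auto memoryRead = [] { return std::string("hello\nworld\n"); };
    std::printf("%s\n", firstLine(memoryRead).c_str());  // prints "hello"
    return 0;
}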
I don't know why people keep insisting on using generic words as identifiers when doing so makes it much harder to find something. If I enter "fwrite" into Google, the correct documentation pops right up. I can easily find all the places where file write occurs in my code. I can easily find all the places where network activities are triggered when the associated functions have unique names. Using an if statement and four different *_read function calls is not a shortcoming at all when you take test coverage into consideration.
I'm not sure polymorphism is the best justification for OO. There are plenty of functional paradigms that can abstract over file implementations just as easily as OO. I think the better argument is that a file represents a resource and thus has a more "thingly" character than other "dumb" data.
I have a different opinion about why OO was popular. I think it can work well for small programs. But it doesn't scale well to bigger problems. It just becomes too confusing. Once people used it for bigger things, possibly because programs got bigger with Moore's law, it can just be untenable. (But, then again, in some places it still can make sense.)
You are right, but not wrt small vs large. Instead, OO works well for artificial situations (classic case - UI frameworks) vs. real world situations (classic case - person isa contractor vs. person isa employee).
As soon as you start modeling real world situations (like bill being an employee but at the same time working a late shift as a contractor), then you're in the world of complex stuff that single inheritance, multiple inheritance, or anything less than insanely complex relationships requiring tens or hundreds of underlying entities to enact just can't handle.
IMO one reason for these fruitless ideological arguments (is OO good? is it bad?) is that OO was originally sold using these real-world examples. Which anyone who's worked on a real-world system knows is BS.
One thing I find problematic about C++ style OOP is that classes are just bags of global-like variables. Unless you keep your classes very small, member data can be mutated in a dozen places in the class in a dozen ways, and it's just a big spaghetti mess. I understand that c++ code can be written cleanly and readably, but in most cases it's not.
Object-oriented programming became popular because a skilled software engineer could create an application outline/spec and then pass that on to lower skilled programmers who would implement the spec without completely fucking it up. It made it easy to separate out work among large teams and write tests (ok we are going to divide up the work between these implementations, each of which gets its own set of unit tests for each function) and standardized some of the high level interactions ahead of time (see: convert spec to code). It was never meant to be a surgical scalpel that a highly skilled developer could use to attain high individual productivity.
Also I highly disagree with the assertion that "Data structure and functions should not be bound together". I would argue that if you are using any sufficiently complex or niche data structure that is not part of some standard library or a primitive, you absolutely do want to bind functions to the datastructure. It is much better to define a data structure with a set of logical human-readable functions for what you want to do with it than to keep it as some collection of primitives or standard lib datastructures you pass around everywhere. Let's say you are writing a spell checker. You really want to abstract away all the ugly details of the trie and any other helper datastructures. You also may want to create an interface for some "word library" that you can extend with different solutions to test e.g. performance (if you are trying to optimize for caching). Object oriented programming really simplifies things in a scenario like this.
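As a stripped-down C++ sketch of that spell-checker scenario (the WordLibrary interface and the hash-set-backed implementation are made up; a trie-backed one would present exactly the same interface and could be swapped in to compare performance):
#include <string>
#include <unordered_set>

class WordLibrary {
public:
    virtual ~WordLibrary() = default;
    virtual void add(const std::string& word) = 0;
    virtual bool contains(const std::string& word) const = 0;
};

// One implementation; callers never see the container (or trie) behind it.
class HashWordLibrary : public WordLibrary {
public:
    void add(const std::string& word) override { words_.insert(word); }
    bool contains(const std::string& word) const override { return words_.count(word) > 0; }
private:
    std::unordered_set<std::string> words_;
};

int main() {
    HashWordLibrary lib;
    lib.add("erlang");
    return lib.contains("erlang") ? 0 : 1;
}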
OO is good for programmers to model problems, not for yielding the best result. I find that thinking in terms of functional code (info being passed from one place to another; not diluted into a lot of objects with state) yields the best result in terms of stability and lack of bugs
What are the concepts today that smart people are convinced are the "right" way to do a thing that later will seem dubious?
- Microservices?
- Agile development?
I feel like I am not trendy enough to know what the latest development religions are.
I've always heard of Pascal described as "the Java before Java", or at least quite an OO language. And since I knew Java I never thought to even go and look at it. Did I miss something here?
OO is basically about modeling data. It doesn't make any statements about procedures, even if it includes operations on said data. If you expect more from OO, you will be disappointed.
Is it just me, or do these web articles written in plain <p> and <h*> tags just have some magical command over your attention, as if carrying some authority, like the author is so focused on his content that he doesn't have a single mental cycle to waste on any more than the bare minimum UI to convey it?
The author died a couple of days ago in (I think) a bicycling accident. I imagine people are going through his old posts as they remember him or hear about him for the first time.
I agree with a lot of the criticisms in Dr. Armstrong's article, but I would say that they mainly affect languages such as Java that enforce OO; multi-paradigm languages that allow for OO but don't force it on you don't have these problems. I believe that OOP has valid uses.
> Objection 1. Data structure and functions should not be bound together
In languages like OCaml and C, one defines types separately from functions. Sometimes, however, I find that a function is best associated with a particular type. In OCaml, that's one purpose of modules. (The convention is to call the main type "t.") However, this organization shouldn't be forced on you.
> Objection 2. Everything has to be an object.
This objection only applies to languages like Java that force OO on you. In OCaml or C++, I don't have to use OO, but I still can if I want to.
> Objection 3. In an OOPL data type definitions are spread out all over the place.
Armstrong complains about having to decide what to inherit from when making a Time object. IMO this is a strawman; in Java, classes inherit from Object by default and I don't see any inconvenience. I do agree that it can be annoying that Time has to be object-oriented, and this is where the benefit of OO being voluntary comes in.
> Objection 4. Objects have private state.
I either disagree with or don't understand this objection. Dr. Armstrong lists three ways to deal with state.
His third option is to use pure functional programming. I see both the beauty and practical value in making functions be functions in the math sense, that is to say, mappings from elements of the domain to elements of the codomain. Then, you get equational reasoning. I am sympathetic to this idea. Nevertheless, languages like Haskell aren't for everybody.
His second option is to control access to mutable variables with scope. You can implement a pure function in an imperative way, and that can be a form of encapsulation. This is a good approach, too.
Dr. Armstrong's first option, which he says is the worst, is the idea that objects maintain hidden state which people control via methods. To me, this isn't about state, but rather encapsulation and abstraction. The idea is that when one, say, inserts into a hash table, the person doesn't think about the implementation ("hash the key and walk down the corresponding bucket"), the person thinks of the abstract problem domain ("map this key to this value"). The hash table interface hides its internal state behind the methods. Even in a programming language like C, one writes functions that operate on, and mutate, structs to abstract things away.
Although the second option is good, I see it as orthogonal to the first option, not a superior alternative. If you are going to confine state to within a function, ultimately, don't you have to do mutations on the local variables, and therefore call some kind of method or impure function on them? However, perhaps I am misunderstanding this fourth point.
A lot of arguments, including this submission, pit OOP against FP and argue that FP is superior to OOP. Even though I use FP, I also think that OOP can be useful. They are not opposites and don't have to conflict.
OCaml is infamous for its object layer, yet I ended up using it for a project. I wanted to describe UI "widgets" such that I could compose arbitrary different types of widgets. Before deciding on objects, I had considered first-class modules or a record of functions, but ultimately decided that objects were the most readable choice.
In my OCaml program, I mainly viewed OOP as a way to abstract different types into a common idea. Widgets are all different, and they have different behaviors, but they all have a position in space, a length, and a width. One can also reposition them (mutation!). To me, the purpose of OO is to express ideas such as these. I note that when I used OOP here, I cared less about state and more about the abstraction of what a "widget" was, via dynamic dispatch. It is bad when the language forces OO on you, but it has valid uses.
I largely agree with this, except for the initial objection ("Data structures and functions should not be bound together") and the "Why was OO popular?" section, which is missing one answer: the notion that "objects" from the real world encapsulate state and behaviour was not only a solid premise, it was an attempt to interface the computer world with the real one. That was not a failed concept; it makes sense. It's just that very few teams find the discipline to model their core domain this way.
Which brings me back to the original objection. I think it holds most of the time, except when it doesn't, and that exception is your core domain model. The first half of the Blue Book[1] lays out straightforward means of arranging code, functions, data/state, and related behaviours in a way that can be managed and maintained over time. This matters because, as most folks who've spent any length of time maintaining vast applications will know, it's incredibly hard to reason about a first-class concept in an application without clear boundaries around that concept, its structures, and its behaviour. Most of us are unlucky and find these scattered across the landscape. Few applications take the care to "model" these concepts clearly.
Does this modelling have to be done with a "Domain Model", or DDD, or some other process loosely coupled with OOD? Probably not. But another developer absolutely has to be able to reason about those structures and behaviours. They have to be able to read the code easily and grok it quickly. And having done that, they don't want to be surprised by some distant missing element 20 calls, 1,000 lines, or 15 modules (repos, submodules, etc.) away! This is possibly the biggest time-sink, and therefore "cost", of development. One could take this further and postulate that about half of us are employed as a direct result of applications whose core concepts are so poorly designed, or so hard to reason about, that a massive volume of work (time?) is dedicated to unwinding the resulting ambiguity.
I don't want to suggest that OOP or OOD/DDD/{other modelling process} would necessarily fix this, but the attempt to clarify and find a means to make modelling these critical concepts easier and less costly is admirable IMO.
It's ok if your infrastructure takes a different approach, or is "functional" or "dynamic" in nature. If your test suite uses completely different patterns and paradigms because the tooling makes that easy then - awesome! But if the core model/concepts of your application are hard to understand, reason about, and therefore maintain, then you're pretty fucked.
OO doesn't "suck". Its spirit has just largely been lost and, like many other things in life, it has been hijacked and mutilated into something many of us have come to loathe because we've never seen it deliver on its promises. I guess we will be having this conversation again in another decade about something else that's hugely popular right now.
I suggest OO is a useful abstraction because it maps well to the universe we are trying to describe, especially with respect to state.
Rule of thumb?
Anything that can reasonably be done in FP should be done there, under "library"-like conditions (some people use the term "FP core"; I think of it more as a lib).
The point is to parse out as much of the problem into highly individual parts that can be built/tested on their own.
Think of parsers, pre-processors, utilities like lodash, node libraries.
These things tend to be truly stateless.
Then OO for the inherently stateful stuff.
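A rough sketch of that rule of thumb, assuming a toy shopping-cart example (purely illustrative): the pricing logic is a pure, library-like core, and a small stateful piece wraps it and delegates to it.

    (* FP core: pure functions, easy to build and test on their own. *)
    let total prices = List.fold_left ( +. ) 0.0 prices
    let with_discount pct amount = amount *. (1.0 -. pct /. 100.0)

    (* Stateful part: the inherently mutable cart delegates to the core. *)
    type cart = { mutable items : float list }
    let add cart price = cart.items <- price :: cart.items
    let checkout cart = cart.items |> total |> with_discount 10.0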
I also think the discussion might be coloured by the type of dev: UI is fundamentally stateful, very much so, and there's no avoiding it. OO lends itself very well to abstractions such as "button" and "text field". Backend work, maybe less so, depending on the domain.
FP and OO are overly complicated, and neither is feasible at industrial scale. They resemble the craft-workshop mode of production, where everything depends on the individual's personal skill; that skill greatly affects product quality, which makes the production method extremely unreliable. FP and OO are, in effect, a detour: highly embellished, ineffectual, and the source of all kinds of failures.
Most OO systems are just simulations of real-world surface phenomena, and the whole system ends up a mess. I don't think the good use of OO is to simulate the real world; it is to design the system around an abstract, refined data model as the prototype. For example, ggplot2 in the R language is a clear system built around a well-designed data model. A good OO system therefore tends toward a dataflow system, and I suspect ggplot2 would have been a purely data-driven plotting system had OO not been in vogue at the time.
Excessive application of OO and FP design patterns increases complexity and the probability of errors, and reduces performance, without any benefit. The complex networks of relationships between objects in an OO system are also difficult to maintain.
I tend to construct systems from the simplest concepts and the most basic techniques, syntax, and functions, and to use them to express my ideas. The Pure Function Pipeline Data Flow is the simplest, most stable, most reliable, and most readable approach I know. There was a great poet in China, Bai Juyi, whose poetry even the illiterate could understand and appreciate. I hope that my code can be understood by a junior programmer, even in the most complicated system.
For me, programming is the process of designing a data model that is simple and fluent to manipulate. More than 80% of the functions in my project are ->> threading-macro code blocks; each step is simple, verifiable, replaceable, testable, pluggable, extensible, and easy to run across multiple threads. The Clojure threading macro provides language-level support for the pure-function pipeline / dataflow style.
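As a hedged illustration of that pipeline style (using OCaml's |> operator as a stand-in for Clojure's ->> threading macro, and an invented text-cleanup task): each step is a small pure function, and the data flows through them in order.

    (* Each stage is independently testable and replaceable; the pipeline
       just threads the data through them. *)
    let report lines =
      lines
      |> List.map String.trim
      |> List.filter (fun l -> l <> "")
      |> List.map String.lowercase_ascii
      |> String.concat "\n"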
..."I wrote a an article, a blog thing, years ago - Why object oriented programming is silly. I mainly wanted to provoke people with it. They had a quite interesting response to that and I managed to annoy a lot of people, which was part of the intention actually. I started wondering about what object oriented programming was and I thought Erlang wasn't object oriented, it was a functional programming language.
Then, my thesis supervisor said "But you're wrong, Erlang is extremely object oriented". He said object oriented languages aren't object oriented. I might think, though I'm not quite sure if I believe this or not, but Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it's based on message passing, that you have isolation between objects and have polymorphism.
Alan Kay himself wrote this famous thing and said "The notion of object oriented programming is completely misunderstood. It's not about objects and classes, it's all about messages". He wrote that and he said that the initial reaction to object oriented programming was to overemphasize the classes and methods and under emphasize the messages and if we talk much more about messages then it would be a lot nicer. The original Smalltalk was always talking about objects and you sent messages to them and they responded by sending messages back."
See https://www.infoq.com/interviews/johnson-armstrong-oop (2010) for Armstrong's full answer on OO (and more); it's worth a read.