I purposefully left out the word "versus" because I do not believe there is such a strong dichotomy and to avoid flamewars.
I had several stages about "objects". The first was the collision 50 years ago in my first weeks of (ARPA) grad school of my background in math, molecular biology, systems and programming, etc., with Sketchpad, Simula, and the proposed ARPAnet. This led to an observation that almost certainly wasn't original -- it was almost a tautology -- that since you could divide up a computer into virtual computers intercommunicating ad infinitum you would (a) retain full power of expression, and (b) always be able to model anything that could be modeled, and (c) be able to scale cosmically beyond existing ways to divide up computers. I loved this. Time sharing "processes" were already manifestations of such virtual machines but they lacked pragmatic universality because of their overheads (so find ways to get rid of the overheads ...)
Though you could model anything -- including data structures -- that was (to me) not even close to the point (it got you back into the soup). The big deal was encapsulation and messaging to provide loose couplings that would work under extreme scaling (in manners reminiscent of Biology and Ecology).
A second stage was to mix in "the Lisp world" of Lisp itself, McCarthy's ideas about robots and temporal logics, the AI work going on within ARPA (especially at MIT), and especially Carl Hewitt's PLANNER language. One idea was that objects could be like servers and could be goal-oriented with PLANNER-type goals as the interface language.
A third stage was a series of Smalltalks at Parc that attempted to find a pragmatic balance between what was inevitably needed in the future and what could be done on the Alto at Parc (with 128K bytes of memory, half of which was used for the display!). This was done in partnership with Dan Ingalls and other talented folks in our group. The idealist in me gritted my teeth, but the practical results were good.
A fourth stage (at Parc) was to deeply revisit the temporal logic and "world-line" ideas (more on this below).
A fifth stage was to seriously think about scaling again, and to look at e.g. Gelernter's Linda "coordination language" as an approach to do loose coupling via description matching in a general publish and describe manner. I still like this idea, and would like to see it advanced to the point where objects can actually "negotiate meaning" with each other.
McCarthy's Temporal Logic: "Real Functions in Time"
There's lots of context from the past that will help understanding the points of view presented here. I will refer to this and that in passing, and then try to provide a list of some of the references (I think of this as "basic CS knowledge" but much of it will likely be encountered for the first time here).
Most of my ways of thinking about all this ultimately trace their paths back to John McCarthy in the late 50s. John was an excellent mathematician and logician. He wanted to be able to do consistent reasoning himself -- and he wanted his programs and robots to be able to do the same. Robots were a key, because he wanted a robot to be in Philadelphia at one time and in New York at another. In an ordinary logic this is a problem. But John fixed it by adding an extra parameter to all "facts" that represented the "time frame" when a fact was true. This created a simple temporal logic, with a visualization of "collections of facts" as stacked "layers" of world-lines.
This can easily be generalized to world-lines of "variables", "data", "objects" etc. From the individual point of view "values" are replaced by "histories" of values, and from the system point of view the whole system is represented by its stable state at each time the system is between computations. Simula later used a weaker, but useful version of this.
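A minimal sketch of the "world-line" generalization, in Python (the class and names are mine, not McCarthy's notation): a "variable" is not a single value but a history of values, one per time frame, and earlier frames are never changed.

```python
class WorldLine:
    """A value's history: one immutable entry per time frame."""
    def __init__(self, initial):
        self._history = [initial]      # frame 0

    def at(self, t):
        return self._history[t]       # the fact as it was in frame t

    def latest(self):
        return self._history[-1]

    def advance(self, new_value):
        # A new frame is appended; earlier frames are never mutated.
        self._history.append(new_value)

# McCarthy's robot: "where" is not one fact but a fact per time frame.
robot_location = WorldLine("Philadelphia")
robot_location.advance("New York")

print(robot_location.at(0))   # Philadelphia
print(robot_location.at(1))   # New York
```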
I should also mention Christopher Strachey -- a great fan of Lisp and McCarthy -- who realized that many kinds of programming could be unified and also be made safer by always using "old" values (from the previous frame) to make new values, which are installed in the new frame. He realized this by looking at how clean "tail recursion" was in Lisp, and then saw that it could be written much more understandably as a kind of loop involving what looked like assignment statements, but in which the right-hand side took values from time t and the variables assigned into existed in time t+1 (and only one such assignment could be made). This unified functional programming and "imperative like" programming via simulating time as well as state.
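A hedged sketch of Strachey's discipline (the names and the toy computation are mine): each step computes the t+1 frame purely from the frozen t frame. Right-hand sides may only read `old`, and each variable is assigned at most once.

```python
# Tail recursion rewritten as a loop over stable frames: the RHS reads
# time t, the assignment creates the value at time t+1.

def step(old):
    new = {}
    new["i"] = old["i"] + 1               # reads frame t, writes frame t+1
    new["total"] = old["total"] + old["i"]
    return new                            # the next stable frame

frame = {"i": 0, "total": 0}
for _ in range(5):
    frame = step(frame)

print(frame)   # {'i': 5, 'total': 10}
```

Because `step` never mutates `old`, the previous frame remains a usable "version" of the system, exactly as described below for visualizations, undo, and roll-back.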
And let me just mention the programming language Lucid, by Ashcroft and Wadge, which extended many of Strachey's ideas ...
It's also worth looking at "atomic transactions" on data bases as a very similar idea with "coarse grain". Nothing ever gets smashed, instead things are organized so that new versions are created in a non-destructive way without race conditions. There is a history of versions.
The key notion here is that "time is a good idea" -- we want it, and we want to deal with it in safe and reasonable ways -- and most if not all of those ways can be purely functional transitions between sequences of stable world-line states.
The just computed stable state is very useful. It will never be changed again -- so it represents a "version" of the system simulation -- and it can be safely used as value sources for the functional transitions to the next stable state. It can also be used as sources for creating visualizations of the world at that instant. The history can be used for debugging, undos, roll-backs, etc.
In this model -- again partly from McCarthy, Strachey, Simula, etc., -- "time doesn't exist between stable states": the "clock" only advances when each new state is completed. The CPU itself doesn't act as a clock as far as programs are concerned.
This gives rise to a very simple way to do deterministic relationships that has an intrinsic and clean model of time.
For a variety of reasons -- none of them very good -- this way of being safe lost out in the 60s in favor of allowing race conditions in imperative programming and then trying to protect against them using terrible semaphores, etc., which can lead to lockups.
I've mentioned a little about my sequence of thoughts about objects. At some point, anyone interested in messaging between objects who knew about Lisp would have to be drawn to "apply" and to notice that a kind of object (a lambda "thing", which could be a closure) was bound to parameters (which kind of looked like a message). This got deeper if one was aware of how Lisp 1.5 had been implemented with the possibility of late-bound parameter evaluation -- FEXPRs rather than EXPRs -- the unevaluated expressions could be passed as parameters and evaled later. This allowed the ungainly "special forms" (like the conditional) to be dispensed with; they could be written as a kind of vanilla lazy function.
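A sketch of that FEXPR idea in Python (the names are mine, not Lisp 1.5's): pass the branches unevaluated, as thunks, so that "if" can be an ordinary function rather than a special form.

```python
# Only one branch is ever evaluated -- evaluation is late-bound,
# which is what made special forms expressible as vanilla functions.

def my_if(condition, then_thunk, else_thunk):
    return then_thunk() if condition else else_thunk()

def safe_div(a, b):
    return my_if(b != 0,
                 lambda: a / b,               # never executed when b == 0
                 lambda: float("inf"))

print(safe_div(10, 2))   # 5.0
print(safe_div(1, 0))    # inf
```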
By using the temporal modeling mentioned above, one could loosen the "gears" of "eval-apply" and get functional relationships between temporal layers via safe messaging.
So, because I've always liked the "simulation perspective" on computing, I think of "objects" and "functions" as being complementary ideas and not at odds at all. (I have many other motivations on the side, including always wondering what a good language for children should be epistemologically ... but that's another story.)
This reminds me of the way Datomic stores data.
John Carmack came across this idea at some point -- in 'Functional programming in C++', he writes:
"When you start thinking about running, say, all the characters in a game world in parallel, it starts sinking in that the object-oriented approach of updating objects has some deep difficulties in parallel environments. Maybe if all of the object just referenced a read only version of the world state, and we copied over the updated version at the end of the frame… Hey, wait a minute…"
I don't know if he got this explicitly from the Strachey lineage, or if it's the inevitable approach if you try to use immutability to tackle concurrency.
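A sketch of the game loop Carmack is describing (the domain and names here are invented for illustration): every character computes its next state from the same frozen snapshot of the world, and the whole world is swapped at the end of the frame.

```python
# Pure update: reads the old world, returns a new character state.
def update_character(char, world):
    others = [c for c in world if c["name"] != char["name"]]
    nearest = min(abs(c["x"] - char["x"]) for c in others)
    return {**char, "x": char["x"] + (1 if nearest > 2 else -1)}

world = [{"name": "a", "x": 0}, {"name": "b", "x": 10}]
for _ in range(3):
    # All updates within a frame see the SAME old world; nothing is
    # mutated in place, so the update order within a frame cannot matter.
    world = [update_character(c, world) for c in world]

print(world)
```

Because the characters only read the read-only snapshot, the list comprehension could be parallelized without races, which is the point of the quote above.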
Again: it's really worth looking at Dave Reed's MIT thesis ...
I think that as many ideas have crystallized, it is no longer helpful to characterize every pure and/or temporal-logic approach as functional; indeed, the two notions may often be at odds. (Pure) functional programming, at least as (sort-of) defined today, is not directly about modeling time and certainly does not impose synchronicity (a clock); it is more about the idea of denotational semantics, where computations are imagined as functions, and function application imposes an equivalence relation (the function application is said to be equal to its result in this relation). This is very different from the philosophy of synchronous languages.
The modern language Céu is based on Esterel: http://ceu-lang.org/
Terms often lose their meanings as they become religions and more rigid choices and styles. The two in question here are certainly "object-oriented" and "functional programming"!
Absolutely, especially when they're vague. But I still think that the synchronous or temporal-logic based languages (which I love!) are very much distinct from FP (at least in the Haskell "tradition"), although they share with each other -- and not with the classical imperative approach -- the explicitness of state transitions: the synchronous/TL model with its clocks/steps, and the FP model with its (hideous) monads. They also "feel" very different (Esterel/Céu and even TLA+ feel more imperative than functional). It is not surprising that the killer-app for the FP approach is compilers (and other code analysis tools), while the killer-app for synchronous/TL approach is reactive systems, two application classes that are at the opposite, extreme ends of Pnueli's (I think) classification of programs (transformational, interactive and reactive).
Most people building commercial distributed systems today accept a different set of tradeoffs, generally preferring fast performance and the avoidance of possible deadlocks even though it means giving up some of the nice semantics and consistency of the operations.
Do you still think it is possible to build a system with the organically-growable scale of the internet but with some of the nice, predictable pseudotime-like semantics?
And the next round about 15 years ago -- for Croquet -- which included Dave Reed, David A Smith, Andreas Raab, myself, etc. There were three or four different versions of this mechanism, all of which worked, and the designs progressively got better.
There are several sets of issues and tradeoffs. Reed's initial ideas in the 70s were about what an operating system for the whole Internet should look like. One of the many interesting ideas he proposed was to deal with the slow latencies and large numbers of potential users by distributing cloned computations organized by pseudo-time, and using the slow Internet for just input and synch deltas. This is what we first implemented in Croquet in the early 2000s when a typical good Internet round trip for a ping was about 80-100 milliseconds. This is good enough to create a massively parallel game like World of Warcraft (or even a flight simulator game) without any servers at all by just using the distributed machines that the game players are using to play on.
Further versions did more, and more comprehensively.
(A side note is that Croquet without the real-time graphics and interaction, etc. automatically provides a distributed object-oriented data base, etc. Pseudo-time is the larger idea underneath protected distributed atomic transaction data-bases, etc.)
Your last question is a good one, and it has been discussed over the years. Can it be done? How much would need to be done, and where? Etc.
The object work we did at Parc was a part of the network thinking going on in the ARPA/Parc community. And there was an eventual intent to think of higher-level network entities. A quite good intermediate real implementation of "net level entities" was the LOCUS system by Gerry Popek and his group at UCLA after he spent a year at Parc thinking about what network systems should be like (this was really good, and the first several chapters of his book are well worth reading). It was a migratory load balancing idea over heterogeneous machine types, which was quite complementary to the pseudo-time ideas.
I would like to see a highly talented group take another pass at your last question. Resilience to scaling is quite difficult to predict ahead of time, and it's good to do real implementations. Still, the 9 orders of magnitude that the Internet needed was possible with the scheme that the ARPA/Parc community had, and Croquet worked perhaps better than it should have (especially with just a few people working on it).
In any case, I think it is a bit crazy to not use pseudo-time if you know about it, and it's a bit amateurish not to be aware of it. In the middle are religions whose identities are built on various "one true way"s (and this is the opposite of the way any science should be conducted).
I only recently read David Reed's thesis and did not know about this later work until you mentioned it. Almost all of the links I can find about Croquet no longer work. (OpenCroquet and OpenCobalt websites seem to have disappeared.) Can you recommend reading for someone who wanted to understand the design decisions made in the several versions of Croquet?
I have been experimenting with layering pseudo-time on top of asynchronous message-passing in Erlang. One useful trick which I have not yet seen published is that you can have two-dimensional pseudo-times, so that one running system can track multiple chains of causation. (You would not want this happening all the time, since the whole point is to get a consistent history everywhere. But it is very useful for developers.)
An example: Let's say I have an existing object and a clone of it where I have changed some of the methods' behavior, but I want to understand all the consequences of these differences across all the distributed objects they cooperate with. I can send them the same message with different y-coordinate pseudo-times. In a system that relied on wall clock time, the side-effects of these two message sends would merely step on each other non-deterministically. But with pseudo-time, their effects are causally distinct throughout the system and can be dynamically inspected, making it a lot easier to build a human understanding of the behavior of the distributed system.
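A toy sketch of that two-dimensional pseudo-time idea (the representation is my guess at what the parent describes, not their Erlang implementation): every message carries a (step, branch) pair, so the effects of the original object and its modified clone stay causally separate within one running system.

```python
events = []

def send(target, msg, pseudo_time):
    # pseudo_time is two-dimensional: (logical step, causal branch).
    step, branch = pseudo_time
    events.append({"target": target, "msg": msg,
                   "step": step, "branch": branch})

# The same logical message, sent along two causal branches:
send("account", "deposit 10", (7, "original"))
send("account", "deposit 10", (7, "clone"))

def history(branch):
    # Each branch yields its own consistent, inspectable history,
    # instead of the two runs stepping on each other.
    return [e for e in events if e["branch"] == branch]

print(len(history("original")), len(history("clone")))   # 1 1
```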
But what you are doing is good and important. We like a quotation by Goethe: "We should all share in the excitement of discovery without vain attempts to claim priority". In other words, it doesn't matter when good ideas happen -- it is enough that they do happen.
...and it is a nice way to make games:
 'How to Design Worlds: Imaginative Programming in DrScheme' http://world.cs.brown.edu 2008, Felleisen, Findler, Fisler, Flatt, Krishnamurthi
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object." At that moment, Anton became enlightened.
I was surprised at having come to a similar understanding after implementing a very basic OO system in Scheme some years ago.
 - http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent
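The koan in executable form, as a small sketch in Python: a "poor man's object" built from a closure, with message dispatch playing the role of method lookup.

```python
def make_counter(start=0):
    state = {"count": start}            # private state, captured by the closure

    def increment(by=1):
        state["count"] += by
        return state["count"]

    def value():
        return state["count"]

    def dispatch(message, *args):       # the closure IS the object
        methods = {"increment": increment, "value": value}
        return methods[message](*args)

    return dispatch

counter = make_counter()
counter("increment")
counter("increment", 5)
print(counter("value"))   # 6
```

Read one way, `dispatch` is a closure over `state`; read the other way, it is an object whose methods are selected by message. That symmetry is the whole joke of the koan.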
I thought "why don't they just keep this simple and use functions instead of all these cumbersome classes!"
Then I started to write something using function composition to keep things simple and got annoyed because sometimes I wanted some helper functions on my functions and it was cumbersome to do that. And I thought "I have made a mistake. The true nature of these things is objects."
And then I didn't actually become enlightened. Because everything's an object which is also a closure and we're all made of stardust. But none of that means anyone is going to understand or use my library. Sometimes it's necessary to give things names people expect and keep the higher order abstractions to myself.
Once you internalize the truth that Objects and Closures are two ways to express the same sort of things, your mind is free to craft wonderful solutions.
1. http://www.artima.com/weblogs/viewpost.jsp?thread=180638 (Inner Classes or Closures)
2. http://www.artima.com/weblogs/viewpost.jsp?thread=182412 (Clear, Consistent, and Concise Syntax (C3S) for Java)
3. https://www.artima.com/weblogs/viewpost.jsp?thread=202004 (Comparing Inner Class/Closure Proposals)
PS. I remember the Artima articles because I liked them at the time.
I think that's a decent way to explain Java's lambda expressions to developers without FP experience, but it's not technically correct. Lambda expressions are compiled into static methods in the class file.
I think FP and OO are fundamentally different in how they approach programming: the way they break down problems, what they consider to be good code, and so on. OO views code as small units of encapsulated state communicating with one another. FP views code as a series of transformations. OO code is not good FP code, and vice versa.
On a lower level you can find many similarities. OO patterns are largely FP language features, from a time when knowledge of FP was not very widespread. At least five of the Gang of Four patterns are first-class functions in disguise. Going the other way, a very typical pattern in functional programming is to build up a description of what should happen, and then have an "interpreter" actually execute that description. These interpreters often look like "coalgebraic" structures, which is an FP term for ... objects. Now isn't that interesting?
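Both directions of that observation can be sketched in a few lines of Python (the examples are mine). First, the Strategy pattern collapses to passing a plain function; second, the FP "build a description, then interpret it" style treats the program as data until an interpreter runs it.

```python
# Strategy pattern as a first-class function: no interface, no subclass.
def total_price(items, discount_strategy):
    return discount_strategy(sum(items))

print(total_price([100, 50], lambda t: t * 0.9))   # 135.0

# Description + interpreter: the "program" is just data...
program = [("add", 5), ("mul", 3), ("add", 1)]

# ...until an interpreter gives it behavior.
def interpret(program, value=0):
    for op, arg in program:
        value = value + arg if op == "add" else value * arg
    return value

print(interpret(program))   # (0 + 5) * 3 + 1 = 16
```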
In summary, the techniques often look very similar if you squint, but the fundamental way they approach programming is very different.
That's a very quick sketch, which is somewhat hazy on the details. Apologies for the brevity and lack of nuance.
I also really liked his talk on probabilistic programming: https://www.youtube.com/watch?v=e1Ykk_CqKTY
The questions for software development are about creativity and economy. The jury is still out on these, to the best of my knowledge.
A few little things I've identified that I do, that aren't totally mainstream:
- Don't reassign parameters, in fact, don't reassign things at all, if you can avoid it; show your work. It makes for more expressive code, and you'll have fewer bugs.
- Think of your code as doing "evaluation", not "computation". Code should take a bunch of inputs, do some stuff to them, and then return an "evaluated" response, vs. the viewpoint of "doing a procedure".
- Try not to make functions side-effecting. Avoid "void" unless you really need it, and really, really avoid distant side-effects.
- Don't be afraid to return a new object that seems "large" from a method. If you do it right, you'll mostly be copying references anyway which is pretty cheap.
- Share data liberally by reference all over your code.
- Don't be afraid to use a bit of recursion.
On the other hand, I'm not afraid of a little grubby OO now and then, especially when dealing with I/O, file handles, network streams, etc.
All to say, I think the two styles are about 90% compatible but there are some parts that don't mesh well. I'm not afraid to mix them, though.
Regarding side effects, it might even be possible to create almost-side-effect-free server code; but the state of most native client frameworks is just not ready for this. State can be banished/abstracted into the lower levels, but it doesn't go away.
(Speaking as an iOS dev. There is React Native, but this only handles the view layer; good luck with stateless CoreData)
See in the HOPL... "The Development of the Emerald Programming Language"
Now, that means that the language is not forcing you to share without mutability, because you can leave off "const". That is, you have to be disciplined. But if you put "const" on the reference, the compiler won't permit you to modify the data through that reference.
For me, it's a definite "accent" that I have when I write code in other languages.
From my time with Smalltalk I found this pattern quite remarkable, though also at times hard to trace. You'd tell an object to do something, and it might access some object it knows about, maybe do a test or guard, and then it would tell the other object to go do that thing. And the other object might in turn delegate, many levels deep. The result could be really incredible and compact, though it also felt like it wasn't actually easier to understand. Kind of Ravioli Code (http://c2.com/cgi/wiki?RavioliCode), though not the kind of modern Ravioli Code you might see because someone takes DRY too seriously – instead it reminded me more of a mathematical proof. Mathematical proofs are really cool, but they do not contain the information necessary to reproduce them; a mathematical proof is the result of studying and pondering and struggling with a problem for a long time until you reduce it to something compact and beautiful. The result seems magical, but in part because you threw away all the struggling that produced the result. That's what the core Smalltalk class hierarchy felt like to me: beautiful, maybe not reproducible, not always accessible. All of which is a critique of one place and time of code, and OO is bigger than that, but I suspect it's also some of the result of creating goal-oriented methods All The Way Down.
Functional programming can also be goal-oriented if you can reify all your goals, because ultimately that which isn't reified (that is, made into data or state) can't be the result of a function. I don't have as intuitive a sense there, is goal-oriented functional programming an ideal people hold?
Another Smalltalk lens on OO is message-oriented programming, where you have a set of objects or actors which pass messages, but never directly "touch" each other, and in some sense objects are more autonomous than a conventional understanding of OO. This does not map well to functional programming – the purpose of messages to autonomous objects is to effect state, to make changes in the world. Of course that's the purpose of all programming, so it's not crazy. In some sense functional programming lifts the purposeful and useful outcomes of a program out of the programming, and into the environment. I would consider that a stronger contrast with OO.
Some languages (spoken languages) construct sentences Subject->Verb->Object, others Subject->Object->Verb, etc, etc. The underlying relations between the parts are still there (there's an isomorphism in the graph), but the projection onto linearized words just takes a slightly different angle.
In OO languages, we treat objects as the important component, and functions as their children: bar.foo(). In FP languages, we treat functions as the important component, and objects as data passing through: (foo bar).
There's no difference in the relations, it's just a different projection from high-dimensional computation-space onto one-dimensional text that causes all sorts of confusion.
That's not to say the difference doesn't matter: projecting computation in a different way frames a different mental model for the programmer that might help or hurt, depending on the specific domain and on the specific problem.
But I think there's value in trying to see the zen in OO's and FP's isomorphism, and practicing switching projections in the same domain as a problem-solving technique.
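The same relation under both projections, sketched in Python (the example is mine): the underlying operation is identical, and only the "angle" of the syntax changes.

```python
# OO projection: the object is primary, the function is its child.
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

# FP projection: the function is primary, the data passes through.
def area(circle):
    return 3.14159 * circle["radius"] ** 2

print(Circle(2).area())       # object.verb()
print(area({"radius": 2}))    # verb(object) -- the same number
```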
These are very distinct from an FP approach. Note that in Scala, you can do 'pure FP' that looks very much like OO,
myData = oldData.change().map(x => x+1).filter(x => isTrue(x))
In the large, this is much harder to reason about! Even in Scala, a language which gives us OO, we (Verizon Labs) as an organization elected to lean hard on the FP side and shy away from the OO side of things.
There's a strict, technical definition of "functions which have no side effects", which defines a coding style. But there's also a more subtle, ambiguous definition of "functions as the primary and most irreducible building block of code", which defines a coding philosophy. In my opinion, FP is the latter, so I'd actually disagree and say that your Scala code isn't FP, since it emphasizes transformations on an object.
Maybe I'm in the minority here.
>so I'd actually disagree and say that your Scala code isn't FP, since it emphasizes transformations on an object.
What I wrote is no different than
change(oldData) |> map(x => x+1) |> filter(x => isTrue(x))
OO makes code understandable by encapsulating moving parts. FP makes code understandable by minimizing moving parts.
I think they can be used in a complementary manner. One area where functional thinking can be a benefit can be in code that is tested with a lot of mocks. Instead of having a bunch of objects that mutate state and that need to be mocked out, model that part of the program as a series of pure function transformations of data. It will be much easier to test and reason about.
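A sketch of the refactoring described above, with an invented domain: instead of mocking a stateful collaborator, pull the logic out as a pure transformation from data to data, which is trivially testable.

```python
# Hard to test: would need a mock repository and a mock mailer.
#   def apply_discounts(repo, mailer): ...

# Easy to test: a pure function from data to data.
def apply_discounts(orders, rate):
    return [{**o, "price": o["price"] * (1 - rate)} for o in orders]

orders = [{"id": 1, "price": 100.0}, {"id": 2, "price": 40.0}]
discounted = apply_discounts(orders, 0.25)

print([o["price"] for o in discounted])   # [75.0, 30.0]
```

The I/O (loading orders, sending mail) stays at the edges; the part worth testing carefully needs no mocks at all.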
https://www.cs.utexas.edu/users/wcook/papers/OOPvsADT/CookOO... is a paper that argues them as complementary to one another. It's worth reading in its entirety, but an excerpt from the conclusion:
A detailed analysis of the trade-offs involved must be made in order to choose whether an ADT or PDA ["object"] is better suited for a problem. Some relevant questions in this choice are: How likely is it that the system will be extended? Must the security of the system be preserved, even at the expense of extensibility? Is it possible the unforeseen interactions may be desired? Is the environment dynamically extensible, and must the additions interact in complex ways with existing code? How much efficiency is required? Is it likely that there will be a large number of binary operations with complex behavior?
In general, ADTs tend towards tight coupling, yielding more predictability, performance, etc. Objects tend towards looser coupling, allowing more flexibility and extensibility. This is a reasonable microcosm of the relationship between FP and OO.
As I mention in the opening, I do not think there is a strong dichotomy between functional and OO programming. I believe historical accident created "Objects vs functions", and by historical accident I mean Java. It is unclear to me why you would disallow creating plain simple functions and force everything into objects. Probably a misunderstanding of "everything is an object".
Famously, Alan Kay emphasizes "message passing" as the key characteristic of object orientation. Since every discussion about OOP and functional programming eventually ends up with state as the main discussion point, I fail to see how a definition of OO based on message passing contrasts with the benefits of functional programming around state management.
In fact the way I see it is that when we have more units communicating with each other they form a system that should be functionally pure from an external point of view but can be OO in its internal implementation.
EDIT: Continuing my train of thought: the problem with OO in its current common form is that objects often/always leak details about their state. So you have a tuple (state, [actions]) where often using an action is invalid given the tuple's state.
For a long time, lambda-calculus has been seen by a lot of programming language researchers as the foundation of programming to which every other paradigm has to be reduced. This research tradition produced much work on encoding object orientation (whether class-based or object-based) into lambda-calculus. Ideally such encodings should be sound and complete. That has proven difficult. In parallel, functional languages acquired OO extensions. The first serious implementation to make this (sort of) work was OCaml.
Arguably, all that work wasn't quite convincing. Eventually, it became clear that lambda-calculus isn't the basis of all other programming paradigms. Instead people tried the reverse: encoding lambda-calculus into object-orientation.
Scala really nails this, and validates your intuition that there isn't a dichotomy. Instead FP is a special case of OO.
If you want to understand the relationship between OO and FP, there is no better way at this point than to learn Scala and study how lambda-calculus (both at program and type level) is encoded into class-based object orientation.
First, the factually incorrect:
> For a long time, lambda-calculus has been seen by a lot of programming language researchers as the foundation of programming to which every other paradigm has to be reduced.
... not really. The lambda calculus is a way to represent computation, which is Turing equivalent; it can encode any computation which a Turing machine can. You can reduce any program to either a Turing machine or the lambda calculus, as they are both capable of representing any computation. The lambda calculus is not inherently functional, or OO, it's much lower level. To be fair, lambda calculus did lead directly to Lisp, which has long been associated with functional programming (though it is also not inherently functional - see Emacs Lisp or Common Lisp).
> Eventually, it became clear that lambda-calculus isn't the basis of all other programming paradigms.
Again, no. The lambda calculus is as much a basis of all other programming paradigms as the turing machine is.
> encoding lambda-calculus into object-orientation
After the above explanation, hopefully it's clear that this statement doesn't really make sense. Again, I'm guessing that this is mostly due to incorrect use of the term "lambda calculus" - you likely meant to use another term.
On to the stuff I just disagree with:
> FP is a special case of OO
Highly disagree. FP and OO are simply two different methods of organizing and connecting code and data - in OO, you have objects that encapsulate state and expose functions to the outside world that modify and retrieve that state. In FP, you write programs as series of functions that transform data. The fundamental difference is that in OO you are treating data and its related functions as a fundamental unit together and structure your program around connecting objects together, and in FP your program is essentially a data pipeline. The other distinctions between OO and FP (immutable vs mutable state, generic collection interfaces vs specialized interfaces for different kinds of data) stem from that fundamental dichotomy.
Scala may combine the two, and may do so elegantly (I really don't know, I've never used it), but even if its particular kind of FP is encoded in an OO system, that doesn't imply that FP at large is a special case of OO. In fact, I'd argue the opposite: a method on an object is simply a function with an implied argument (the object) in a scope where that argument is inaccessible from outside. You can see this clearly in pre-ES6 JS, where that's literally how an object is constructed (in the Crockford style - the prototype style is a bit different).
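That "method = function with an implied argument" point can be made concrete in Python, which even spells the implied argument out (the class here is my own toy example):

```python
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"hello, {self.name}"

g = Greeter("world")

# The "method call" and the "function call with an explicit object
# argument" are the same function invoked two ways:
print(g.greet())          # hello, world
print(Greeter.greet(g))   # hello, world -- the object made explicit
```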
Yes, all programming languages are equivalent in the sense of the Church-Turing thesis. From the point of view of PL research, that means equivalence with TMs is an uninteresting criterion for comparing PLs. Consequently, PL researchers have developed finer tools for comparing languages, typically centring around compositionality. An encoding enc from language L1 to language L2 is compositional if, whenever P = f(P1, ..., Pn) is a program, then enc(P) is of the form enc(f)(enc(P1), ..., enc(Pn)). This idea, which I present here in a very simplified form, has been studied quite deeply, and in this sense it has been difficult to reduce OO to FP, unlike the other way around.
And yes, lambda-calculus is the essence of FP. Indeed there is no PL paradigm that is so closely connected with a single formalism as FP.
> a method on an object is simply ...
In summary: OO is substantially more expressive than FP.
The goals of OO features such as method overriding are accomplished by typeclasses or implicits in FP.
It's just that the default polymorphic mechanism for FP is parametric polymorphism, and the default mechanism for OO is ad-hoc polymorphism.
OO didn't have parametric polymorphism initially, and added it with Generics.
FP didn't have ad-hoc polymorphism initially, and added it later with typeclasses or implicits.
Choosing OO or FP really just puts you on one of the two sides of the https://en.wikipedia.org/wiki/Expression_problem . They are more complementary than proper subsets of each other.
I could counter that support for parametric polymorphism is poor in OO langs as much as you claim support for ad-hoc polymorphism is poor in FP langs. (But I don't place much stock in any of the aforementioned claims outside of some kind of artificial, academic context.)
Real-world problems that aren't academic edge cases will be supported by both FP and OO.
Multimethods give you ad-hoc polymorphism in an untyped setting.
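A minimal sketch of that idea in Python, a dynamically typed setting: dispatch on the runtime types of *all* positional arguments (unlike `functools.singledispatch`, which only looks at the first). The `MultiMethod` class and `collide` example are illustrative names, not from any particular library.

```python
class MultiMethod:
    """Ad-hoc polymorphism without static types: pick the implementation
    by looking at the runtime types of all positional arguments."""

    def __init__(self):
        self.registry = {}

    def register(self, *types):
        def decorator(fn):
            self.registry[types] = fn
            return fn
        return decorator

    def __call__(self, *args):
        fn = self.registry.get(tuple(type(a) for a in args))
        if fn is None:
            raise TypeError("no method for given argument types")
        return fn(*args)

collide = MultiMethod()

@collide.register(int, int)
def _(a, b):
    return "int/int"

@collide.register(int, str)
def _(a, b):
    return "int/str"

print(collide(1, 2))    # → int/int
print(collide(1, "x"))  # → int/str
```

This is roughly what CLOS-style generic functions do: the "method" chosen depends on the arguments, with no class hierarchy or typeclass machinery required.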
In the same way assembly is substantially more expressive than C. Expressiveness in the way you've defined here is the least interesting aspect of a programming language.
Since it's well established that OO and FP are equivalent in the sorts of things you can compute, you ideally want a paradigm that makes it easy and intuitive to code up computations versus one that is strictly 'expressive'.
> This view is too simplistic, and doesn't scale up to include OO features such as method overriding.
I'm pretty sure this is simply false. Modern Haskell with GADTs for example can easily encode any aspect of OO, including method overriding. Do you have a counter-example in mind?
counter-example in mind?
Two programming paradigms may be both Turing complete (and therefore formally equivalent to each other), but the paradigms may still be quite different. Yes, at the level of lambda calculus structured programming and functional programming and OO are all equivalent. That doesn't make structured programming into functional programming, though. As paradigms, they are radically different.
Can you easily reduce a program with concurrency to either of these? If not then you are leaving out a large class of programs and systems. I think this is why Milner invented the pi calculus (a calculus of communicating processes).
It's also why you have Greenspun's Tenth Rule, and why there is probably a Smalltalk corollary for FP languages. You eventually need to be able to solve both halves of the expression problem.
FP is when you really love functions, you put functions in functions.
Of course, if you like both, you can put functions into objects and objects into functions.
I would actually argue that OCaml has a better OO system than most OO languages, but it looks and feels quite different (it's "functionalish": immutable by default, objects are structural and can be classless, etc).
In the end, despite its strength, it's used very little in the OCaml community: Modules are better 95% of the time.
The gold standard for doing nice encapsulation, abstraction and separation of concern is neither functions nor objects, it's ML modules.
I think an understanding of state is fundamental - one thing that brought this home to me was a talk I watched given by Sophie Wilson [https://en.m.wikipedia.org/wiki/Sophie_Wilson] in which she displayed an image of a CPU. The image showed a 'path' across the CPU that was the current state; the next image showed a different path describing a different state. The path was a lot like a bolt of lightning.
An understanding of functional programming (FP) matches this a bit more directly than OO, as FP is a pipeline of functions that describes state change, and it is easy to reason about the state at any point in the pipeline. OO represents a real-world object, so you are encouraged to think about the problem you are modelling rather than how to represent different states. But obviously in the end they both describe state, and logic that describes state transitions.
Pre-OO (and FP), we had structured programming, where the data lived in variables or structs, and anybody at all in the entire program could alter the data. That turned out to be difficult to maintain, because if the data wound up in a bad state, any line in the program could have changed it.
OO (as typically used) meant that an "object" owned the data, and nobody else could touch it. That meant that, if the data wound up in a bad state, only a very small section of the program could have (directly) changed it. That was a huge improvement.
In (pure) FP, the data disappears. You don't talk about it directly (except within one function, where you have bindings to it). The data disappears into the plumbing (the connections between functions), and nobody changes it. If it was computed correctly when created, it's correct forever, because nobody can change it.
In this sense, both FP and OO are trying to solve one of the biggest problems with the old structured paradigm.
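The three styles described above can be sketched side by side. This is an illustrative Python example with made-up names (`account`, `Account`, `Frozen`, `deposit`); the point is who is allowed to put the data into a bad state.

```python
from dataclasses import dataclass, replace

# Structured style: a shared mutable record. Any line of the program
# can corrupt it, so any line is a suspect when it goes bad.
account = {"balance": 100}

def anywhere_in_the_program():
    account["balance"] = -1  # nothing stops this

# OO style: only Account's own methods can touch the balance, so only
# this small section of code could have (directly) broken it.
class Account:
    def __init__(self, balance):
        self._balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):
        return self._balance

# Pure FP style: the data is never changed at all. "Depositing"
# produces a new value; the old one stays correct forever.
@dataclass(frozen=True)
class Frozen:
    balance: int

def deposit(acct: Frozen, amount: int) -> Frozen:
    return replace(acct, balance=acct.balance + amount)

a = Frozen(100)
b = deposit(a, 50)
print(a.balance, b.balance)  # → 100 150
```

Both of the latter two styles shrink the set of code that could be responsible for a bad state - OO by fencing the data in, FP by making it unchangeable.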
- OOP code is usually separated and organized by the principal data it operates on, i.e. the instance of a class, and more specifically the instance's fields.
- FP code is usually separated and organized by the more abstract or philosophical responsibility of the functions, and they may not always share the same arguments.
- Everything is an object
- Inheritance (an object inherits all features of its class and super classes)
- Polymorphism (one object can replace another if the required class is one of its own or its super classes)
- Encapsulation (an object exposes only what it is meant to expose (black box))
- Functions can be used as values (therefore it is possible to write higher order functions)
- There is no state/variables, so that any function call with its input parameters define the result and can therefore be cached quite easily
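The caching point above can be sketched with Python's standard memoization decorator - safe here precisely because the function is pure, so its result is fully determined by its arguments. (Using Fibonacci purely as an illustrative example.)

```python
from functools import lru_cache

# A pure function: no state, so each (argument -> result) pair can be
# cached and reused without changing the program's behaviour.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # each fib(k) is computed only once, then served from cache
```

With mutable state in the picture, this kind of transparent caching would be unsound, which is one concrete payoff of the "no state/variables" rule.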
So in theory it should be possible to combine the two concepts, but in my opinion it doesn't make much sense as they have different strengths and weaknesses (e.g. introducing state to FP is a very bad idea...).
Sadly many languages try to combine both concepts and implement both poorly and on the other hand there are too few people who actually understand the concepts and use them appropriately.
As a rule of thumb:
- Take OOP if you have to model the relationship and interaction between things (objects)
- Take FP if you have to calculate something (like Excel formulas do)
I've heard variations on that a lot, but have not found it to be true at all. I've tried 3 times over the years, as I got better at programming, to write a server for a little mmorpg in OO languages. Each time it turned into a buggy, unmaintainable mess. On my fourth try I was using OCaml because I thought a better type system could help. Again, the OO version I wrote initially was buggy and terrible. But being OCaml, I could rewrite bits and pieces in a functional style and improve it gradually. I ended up with no OO code at all, and very little imperative code (which was only for performance reasons). This is the very definition of "you have to model the relationship and interaction between things (objects)", but OOP was terrible and FP is great.
FP can model relationships/interactions OK. They just have new tools/techniques to do it. For example, instead of mutating things, you create a copy with the change. You then push the actual state/mutation to the edges of your system (e.g. the database), and thus hopefully make it easier to reason about. But it can be slower or hard to change things deep in an object graph. For slowness - we have persistent collections which re-use unchanged parts of the collection and implement performant "copying". For deep structures, Haskell & Scala have lenses.
It's still an ongoing exploration, with people borrowing/discovering new techniques, but I think it's working quite well and I'm happy to keep exploring.
In the other direction, I've been following Rust. I find it very interesting that it's got a very good type system, but still has mutation. Instead of avoiding mutation, they track ownership in the type system, and thus keep it easier to reason about. By doing this, they make the performance/memory management easier to reason about, without making it too hard to reason about where effects are happening.
I joke that if Smalltalk were invented today, it would be called a "functional/object hybrid language for creating Internal DSLs"
In general, objects are supposed to be processes (in the general sense, not necessarily OS processes). Some of those processes might be equivalent to full Turing Machines, but most will be much more limited; it just depends on what kind of formal grammar they recognize, which is key to understanding objects: They are just processes that accept some input and dispose of it as either code (something that is translated and executed) or data (something that is stored and forwarded). Crucially, in either case, objects do work by sending and responding to messages. This is why Dr. Kay says that the actual big idea of OOP is message passing.
Once I learned to appreciate real OOP, I realized that there is actually a natural simpatico between FP and OOP rather than a dichotomy. Consider that if you're doing real OOP, you're going to end up designing lots of little languages and then implementing interpreters/compilers for them. Well, it turns out that it's comically easy to build compilers and interpreters using statically-typed languages like ML (F# in my case).
Nowadays, I use F#, C# (immutably, w/o setters), and SQL to simulate an environment that recapitulates what I think is the real power of Smalltalk/OOP: it can rather directly simulate the process of scientific progress itself. I now find myself designing classes that represent falsifiable statements (e.g., rather than a class 'SearchEngine', I'd have something like 'LowLatencySearchEngine'). Just as scientific progress is often concomitant with theories and models that have ever more symmetry, so it is the case with the graph of classes that describe my system: I somehow end up with fewer central points of failure. For instance, in the case of 'SearchEngine' vs 'LowLatencySearchEngine', the latter made it apparent that my class should be initialized with a set of search providers, and that it should keep latency statistics (thus being able to falsify the proposition). The end result is that the proposition represented by the class naturally contraindicates that I should rely on any one search engine.
There's a lot more to say about this, but the last thing I'll mention is that FP languages like ML teach you that there is a difference between a class and a (static) type. The big idea is that types correspond to logical propositions (the so-called Curry-Howard Isomorphism), but I naturally ended up there just by trying to more effectively simulate the process of science, which brings us full circle to Alan Kay's big idea: computers can simulate any process, they can even simulate better computers. I would argue the scientific process is the best computer of all.
>FP languages like ML teach you that there is a difference between a class and a (static) type
DING DING DING THIS MAN GETS THE PRIZE. I've been trying to come up with a nice way to articulate this. When writing Java I try to think of "value classes" as structs, and classes as "actors".
FP emphasizes immutability. So in OO + FP, objects are treated as immutable data. Functions create new objects instead of modifying the same object (avoiding side effects).
For convenience you can create functions to access parts of an object instead of creating new objects to achieve the same.
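The "objects as immutable data" treatment described above can be sketched with frozen dataclasses - again an illustrative example with made-up names (`Address`, `Person`, `move`). Note how updating a nested field means rebuilding the whole path to it, which is the pain point that lenses (mentioned elsewhere in this thread) exist to ease.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Address:
    city: str

@dataclass(frozen=True)
class Person:
    name: str
    address: Address

# "Modification" returns a new object; the original is untouched.
def move(p: Person, city: str) -> Person:
    return replace(p, address=replace(p.address, city=city))

alice = Person("Alice", Address("Oslo"))
alice2 = move(alice, "Bergen")
print(alice.address.city, alice2.address.city)  # → Oslo Bergen
```

Accessors here are just attribute reads (`p.address.city`); no new object is created for reading, only for "writing".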
Answer: Nothing but syntax and ideology.
Does a duck walk like a duck and talk like a duck because it is a duck (being)?
Is a duck a duck because it walks like a duck and talks like a duck (doing)?
That depends on proper code organization, identifiers, formatting, documentation, coupling, good practices in general.
"Functional Programming Principles in Scala"
"Functional Program Design in Scala"
(There are two more free courses in the "specialization" - only the certificates and the capstone project cost money - original announcement http://www.scala-lang.org/blog/2016/05/23/scala-moocs-specia...)