
Alan Kay's: Why is FP seen as the opposite of OOP rather than an addition? - mpweiher
https://www.quora.com/Why-is-functional-programming-seen-as-the-opposite-of-OOP-rather-than-an-addition-to-it/answer/Alan-Kay-11?share=a52bda70
======
m_mueller
Maybe common FP languages are missing a control structure that would
encapsulate the progression from stable state to stable state (where each
transition is purely functional), while the glue between transitions is as
free as procedural code. Take my field, HPC, as an example; the same would
hold for game engines: there isn't really a reason why kernels couldn't be
functional and performant. In fact, CUDA is in many ways a data-oriented
language. _Outside_ the kernels, however, there is often a need to swap
pointers, for example, so one can avoid unnecessary memory copies between
timesteps. The data in question often runs to tens or hundreds of gigabytes
(the size of a cluster node's memory), so you _really_ don't want to allocate
it fresh between kernels. I don't know enough Haskell to decide whether
monads would be of any help, but these pure FP languages just don't seem to
let me ask how something is going to perform and how to approach the maximum
possible performance for a given algorithm; their syntax only seems to care
about functional correctness.

~~~
dirkt
Monads are actually the "missing" control structure that encapsulates
progression from stable state to stable state, and moreover allows you to
combine and sequence those progressions in a functional fashion.

If you look at the type signature of (>>=) (monadic bind), it says

    (>>=) :: Monad m => m a -> (a -> m b) -> m b

which, if you instantiate m with a monad that represents state, you can read
as: "given a progression 'm a' from state to state with a return value of
type 'a', and a function that uses this value to construct a progression
from state to state with a return value of type 'b', I can compose both
progressions and get a progression from state to state with a return value
of type 'b'". (These "return values" are used to transfer information from
one progression to the next, and are the only thing that takes a bit of
getting used to.)

So pure FP languages do allow you to specify step-by-step progression.
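
As a concrete (if minimal) sketch of the above, here is the State monad from
the mtl/transformers packages used exactly this way; the names `tick` and
`threeTicks` are my own, purely for illustration:

```haskell
-- Each State action is one progression from stable state to stable state;
-- do-notation is just (>>=), so composing actions composes progressions.
import Control.Monad.State

-- A counter whose state is an Int. `tick` advances the state by one and
-- returns the value it saw -- the "return value" handed to the next step.
tick :: State Int Int
tick = do
  n <- get       -- read the current stable state
  put (n + 1)    -- produce the next stable state
  return n

-- Three progressions composed with (>>=) via do-notation.
threeTicks :: State Int (Int, Int, Int)
threeTicks = do
  a <- tick
  b <- tick
  c <- tick
  return (a, b, c)

main :: IO ()
main = print (runState threeTicks 0)  -- ((0,1,2),3)
```

The intermediate return values (a, b, c) are exactly the information
transferred from one progression to the next, as described above.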

Seen from this point of view, Alan Kay's statement 'there is no reason to
invent “monads” in FP' is just nonsense: he has no experience with pure FPLs,
and seems to be familiar only with Lisp, which is a very impure, ad-hoc FP
language.

So while I agree with his conclusion that "both OOP and functional
computation can be completely compatible (and should be!)" (see e.g. OCaml
for a fusion of both concepts), I am not particularly impressed by his
reasoning.

~~~
m_mueller
Thank you for your answer, but what about my specific example with pointer
swapping? I'd need a way to control memory handling in a procedural way in
between pure state transitions. It probably goes further than that still: FP
in general hides from me what happens with intermediate results on the
hardware. I'd need a way to have it use preallocated memory pools, because
these operations are repeated in a very regular pattern (once per timestep or
solver iteration) and thus always occupy the same memory. Versioning of
previous states can (due to hardware limitations) only be done by file
output. I've never seen a pure FP language cover this, because it seems to go
fundamentally against purity principles, but as long as this is the case I
reckon it's simply unusable for all but the smallest-scale numerical
applications.
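
For what it's worth, the buffer-swap pattern described above can be expressed
in a pure FP language: Haskell's ST monad allows preallocated mutable
buffers, updated in place, where the "pointer swap" is just an exchange of
names. A minimal sketch (nowhere near HPC-grade) using the standard `array`
package, with a trivial stand-in kernel; all names are illustrative:

```haskell
import Control.Monad (forM_)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

-- One purely-described timestep: dst[i] = src[i] + 1 (a stand-in kernel).
step :: STUArray s Int Double -> STUArray s Int Double -> ST s ()
step src dst =
  forM_ [0 .. 3] $ \i -> do
    x <- readArray src i
    writeArray dst i (x + 1)

-- Run n timesteps over two buffers allocated exactly once; no per-step
-- allocation, no copies -- each step writes into the other buffer.
simulate :: Int -> [Double]
simulate n = runST $ do
  a <- newListArray (0, 3) [0, 0, 0, 0]
  b <- newListArray (0, 3) [0, 0, 0, 0]
  let go 0 src _   = getElems src
      go k src dst = do
        step src dst
        go (k - 1) dst src   -- the "pointer swap": exchange the names
  go n a b
```

`runST` guarantees the mutation cannot leak: `simulate` is an ordinary pure
function to its callers, yet internally it reuses the same two buffers every
timestep.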

------
mmphosis
To me, these are the key points:

 _<more detail excluded here> This idea did not die, but it didn’t make it
into the standard computing fads of that day, or even today. The dominant fad
was to let the CPU run wild and try to protect with semaphores, etc. (These
have the problem of system lockup, etc., but this weak style still is
dominant.)_

I wonder what the <more detail excluded here> is?

 _More mainstream is that big data systems used *versions* instead of
overwriting, and “atomic transactions” to avoid race conditions._

 _So: both OOP and functional computation can be completely compatible (and
should be!). There is no reason to munge state in objects, and there is no
reason to invent “monads” in FP. We just have to realize that “computers are
simulators” and figure out what to simulate._

------
rashkov
Having a tough time following this, as it seems to require familiarity with
the subject already. Is anyone able to provide or link to a more introductory
version of this topic?

~~~
shalabhc
Seek out other answers on Quora by Alan Kay? Also check out Tea Time from the
Croquet project:
[https://en.wikipedia.org/wiki/Croquet_Project#Synchronizatio...](https://en.wikipedia.org/wiki/Croquet_Project#Synchronization_architecture)

IIUC one idea here is to apply the concept of 'reliable function' into the
object oriented world:

> For example, this would allow “real objects” to be world-lines of their
> stable states and they could get to their next stable state in a completely
> functional manner.

So [object_in_state_A + message_Y] => [object_in_state_B] is some change in an
object as a result of receiving a message. This 'transformation' is functional
i.e. always produces the same result, given the same input state and message
(similar to the pure function concept in FP). Also, the transformation 'rules'
only see exactly state_A and not some invalid in-between state. This allows
reasoning about these transformations clearly. The object is a history of
object states: [object_in_state_A -> object_in_state_B -> object_in_state_C]
etc. Note only the 'stable' i.e. valid states are exposed by the object
(they're unavailable during transformation) so any view is consistent.
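
A tiny Haskell sketch of this idea (the names are my own, purely for
illustration): the transformation is a pure function from (stable state,
message) to the next stable state, and the object's world-line is just a
fold over its message history.

```haskell
-- A toy "object": an account whose stable state is its balance.
data Account = Account { balance :: Int } deriving (Eq, Show)

data Msg = Deposit Int | Withdraw Int

-- Pure and deterministic: same state + same message => same next state.
-- No in-between state is ever observable.
transition :: Account -> Msg -> Account
transition (Account b) (Deposit n)  = Account (b + n)
transition (Account b) (Withdraw n) = Account (b - n)

-- The object's world-line: every stable state it has ever been in.
history :: Account -> [Msg] -> [Account]
history = scanl transition

-- history (Account 0) [Deposit 10, Withdraw 3]
--   == [Account 0, Account 10, Account 7]
```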

Now, to compose many such objects into a larger system that is also
consistent, see the Virtual Time idea (David Jefferson) or the earlier Pseudo
Time idea (David Reed). An overview of Virtual Time is here:
[https://blog.acolyer.org/2015/08/20/virtual-time/](https://blog.acolyer.org/2015/08/20/virtual-time/).

The rough idea is that all messages are stamped with a virtual timestamp - so
any receiver can determine the exact order in which to apply the incoming
messages. The whole system moves forward in virtual time, correcting for any
out-of-order delivery - the real time of each message isn't important anymore.
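
A toy sketch of just this ordering part (real Virtual Time / Time Warp also
handles rollback, which this omits; all names here are illustrative):

```haskell
import Data.List (sortOn)

type VTime = Int

-- Every message carries a virtual timestamp.
data Stamped msg = Stamped { vtime :: VTime, payload :: msg }

-- A receiver applies messages in virtual-time order, regardless of the
-- order in which they actually arrived.
applyInVirtualOrder :: (state -> msg -> state) -> state -> [Stamped msg] -> state
applyInVirtualOrder step s0 msgs =
  foldl step s0 (map payload (sortOn vtime msgs))
```

Two receivers that see the same messages in different arrival orders end up
in the same state, e.g. `applyInVirtualOrder (+) 0 [Stamped 2 10, Stamped 1 1]`
equals `applyInVirtualOrder (+) 0 [Stamped 1 1, Stamped 2 10]`.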

~~~
elcritch
Aside from the Virtual Time idea, it strikes me as very similar to how Elixir
programs work. :-) Well, Erlang as well, but the Elixir language design
encourages maps and typed maps, which resemble objects more closely. It'd be
neat if you could rewind your entire app with a virtual time (like
ClojureScript's debugging setup).

~~~
shalabhc
Right - Erlang does the 'objects and messaging' idea quite well. I'm not sure
if all effects of sending a message to an Erlang process are 'isolated'? E.g.
could a process write to the disk directly or would that also be manifested as
a message?

The result of having a time base universally embedded in all messages and
object histories is that there is no 'global' system state. You can only look
at the system at a specific snapshot, even while it may continue to evolve.

~~~
elcritch
Yes, actors in BEAM can modify files and there's a small bit of local state
which can be set without messages (the process dictionary). However, much like
Haskell you'd still need some point where real side effects could occur. It'd
be possible to do all file actions via specific actors using messages which
could then be modified to support "time-travel".

~~~
shalabhc
Right - you'd also need object 'histories', so handling every message in
effect produces a new version of the process.

