
Can someone explain to me Alan Kay's problem with monads in functional programming? He attacks the monad concept several times in this talk rather cryptically.



I obviously can't tell you what he's thinking, but I can make an educated guess:

In the video (and almost anywhere else you look) you'll note he talks about recursion on the concept of "computer" as a way to scale, because that way the parts are as powerful as the whole.

If you split that concept into the sub-notions of "procedures" and "data", you no longer have that recursive principle, and "functions" in this context are sufficiently equivalent to procedures.

Going the other way, functions don't really scale up: many if not most things in computing are not functions to be computed. One could argue that computers and "computer science" are misnamed, as computation is more the exception than the rule. "Data-around-shufflers" would be more appropriate.

I didn't see the exact monad references, but they look like ways of working around the fact that the "function" primitive is inappropriate. It is really nice, mind you, and has wonderful properties. Kind of like circles as the basis for astronomical orbits: they are "perfect", but the world isn't perfect in the same way, so trying to model the imperfect world with these perfect primitives leads to having to introduce epicycles/monads.

My 2 €¢

Edit: “Functions are nice, but you need to advance the state of the system over time” — https://www.youtube.com/watch?v=fhOHn9TClXY&feature=youtu.be...


> "Data-around-shufflers" would be more appropriate

In the French language we have "calculateur", which is the direct translation of "computer" but only commonly used in the context of scientific computing, and "ordinateur", which is the common name for computers... The meaning of "ordinateur" is pretty close to "data-around-shufflers", from Latin ordino ("to order, to organize") - https://en.wiktionary.org/wiki/ordinateur: "in its application to computing, [ordinateur] was coined by the professor of philology Jacques Perret in a letter dated 16 April 1955, in response to a request from IBM France, who believed the word calculateur was too restrictive in light of the possibilities of these machines (this is a very rare example of the creation of a neologism authenticated by dated letter)"


Great point! Due to the Latin origin you mentioned, Spanish has this too: "ordenador". It would be more or less "sorter" (implying that it does that to data). Very likely influenced by Perret's coinage translated into Spanish.


"Data-around-shufflers" <- that's so it!


Monads are a way to hide the fact that state changes. Basically they allow you to abstract away the concept of time.
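
For a rough idea of what "abstracting away time" looks like, here is a minimal sketch in Haskell (assuming the mtl package; the Int counter and the name `tick` are just made up for illustration):

    import Control.Monad.State

    -- The "state" here is just an Int counter. Each step reads like a pure
    -- value, even though a new version of the state is threaded through
    -- behind the scenes by the State monad.
    tick :: State Int Int
    tick = do
      n <- get        -- read the current version of the state
      put (n + 1)     -- "change" it by producing the next version
      return n

    -- Running it makes the hidden threading explicit again:
    -- evalState (tick >> tick >> tick) 0  ==  2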

Alan Kay seems to push more in the direction of data as CRDTs or fully vector-clocked, so that time is just an additional way to reference some data.

It is close to the idea of separating time from space, or of combining the two.

In CS, we tend to consider a static vision of the world with a function applied to it, when the truth is that the world exists at different times and is wildly different at each of them. So a piece of data would be defined by its place in memory and the clock time (vector clock) at which it was in that memory. Closer to what CRDTs do, in a way.


Another way to put it: with monads (or more generally FP) there is a separation of functions and data. With objects, data is hidden behind procedural interfaces.

I'm not familiar with CRDTs. Can you compress the idea of how to "remember" a whole history of states with them, and why vector clocks are important? The introduction on Wikipedia reads like they are useful for (geographically) distributed processing. I can't really relate or contrast this to monads.


The problem with distribution is that things can happen simultaneously (or at least before the latency enables you to decide who is first), so your goal is to be able to recreate a history despite that asynchronicity.

That is what Consistency in the CAP theorem hints at.

The whole idea is that if you define things not only by their state but also by the order in which that state has evolved, it enables you to know which version is the most recent one, but also to go back in time. It is especially useful if someone comes in late, after you updated the state, and you discover that their update should have happened before the most recent change in state.
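
A minimal sketch of that ordering idea in Haskell (the representation and node names are my own invention for illustration, not taken from any particular CRDT library):

    import qualified Data.Map.Strict as M

    -- A vector clock: one counter per node, missing entries treated as 0.
    type VClock = M.Map String Int

    -- Merge two clocks by taking the entry-wise maximum.
    merge :: VClock -> VClock -> VClock
    merge = M.unionWith max

    -- a happened before b if every counter in a is <= the matching one in b
    -- (and the clocks differ). If neither direction holds, the two updates
    -- were concurrent and the application has to resolve the conflict.
    happenedBefore :: VClock -> VClock -> Bool
    happenedBefore a b = a /= b && M.isSubmapOfBy (<=) a b

    concurrent :: VClock -> VClock -> Bool
    concurrent a b = not (happenedBefore a b) && not (happenedBefore b a)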

Forget that idea of data being hidden behind a procedural interface.

Another way to put it: you have a dog that is dirty at 1600. So you decide to clean him and put him in a bath. He is now clean at 1610. Now you have another person (thread? computer? no idea) who comes at 1620 and was given the order at 1550 to clean the dog, because someone saw he was dirty. The person does not ask whether the dog is dirty or not. He got an order and does it. You now have a dog that is being cleaned again.

From Alan Kay's point of view, there would not be a single dog with a single name, but a dog whose identity is defined by its name and the moment you named it. So when the person who saw that the dog was dirty at 1550, and decided at 1620 to clean him when he was already clean, comes to clean the dog, he would take dog1550 and not dog1620. So he would clean a dirty dog and not a clean one.

He would be in another leg of the Trousers of Time.
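
A minimal sketch of that dog1550/dog1620 idea in Haskell (the states and times are just the ones from the story above; real CRDTs are more involved than this):

    import qualified Data.Map.Strict as M

    data DogState = Dirty | Clean deriving (Show, Eq)

    -- Instead of one mutable dog, keep every observed state keyed by the
    -- time it was observed.
    type History = M.Map Int DogState

    history :: History
    history = M.fromList [(1550, Dirty), (1610, Clean)]

    -- The state the dog was in at (or just before) a given time, so an order
    -- issued against "the dog at 1550" refers to the dirty dog, not the
    -- clean dog that exists at 1620.
    dogAt :: Int -> History -> Maybe DogState
    dogAt t = fmap snd . M.lookupLE t

    -- dogAt 1550 history == Just Dirty
    -- dogAt 1620 history == Just Clean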

A couple of papers: the seminal one by Leslie Lamport on logical clocks and the ordering of events: http://research.microsoft.com/users/lamport/pubs/time-clocks...

And here is the McCarthy paper that Kay talks about: http://www.dtic.mil/dtic/tr/fulltext/u2/785031.pdf


Thank you, I visit HN for this ;)


No problem, I think more people should look at the problems with time. We tend not to deal with it because we only do single-threaded synchronous stuff, but that is not how the future is going to be, nor the world, it seems. So better to learn now :)


What did he mean when mentioning McCarthy having solved that "change of state" problem?


https://news.ycombinator.com/item?id=13036663

I answered here. I linked the paper by McCarthy at the end.


Don't know his or your context, but McCarthy is the creator of Lisp, so in a way he's one of the pioneers of functional programming (there were of course other mathematicians inventing the foundations), which is a way to build programs from small building blocks that are just functions knowing no global state - they always return the same answer given the same arguments.

Contrast this to procedural or object oriented programming where usually the parts know a lot more about the context (state) than they really need to know. This can lead to implicit (and hard to discover) assumptions about the state of the world that are broken later on when some parts are changed, leading to bugs.

The way this works is by combining functions with (higher-order) functions. For example, chain a function that takes an A and returns a B with a function that takes a B and returns a C. This is done without ever speaking of concrete inputs - reducing unnecessary knowledge should lead to fewer bugs. (Whether that's a more practical approach is another question.)
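
For instance, a minimal Haskell sketch (the names `parse` and `double` are invented purely for illustration):

    -- A function from A (String) to B (Int), and one from B to C (Int).
    parse :: String -> Int
    parse = length

    double :: Int -> Int
    double = (* 2)

    -- Chain them with (.) without ever naming a concrete input.
    pipeline :: String -> Int
    pipeline = double . parse

    -- pipeline "hello"  ==  10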


Lisp is built on OOP: all sorts of objects that have state, and functions operating on that state. For instance, an I/O stream, needed to do the "read" and "print" parts of the REPL, is a stateful object.

Lisp is not only where functional programming originated, but also object-based programming leading to OOP. Lisp was the first language which had first-class objects: encapsulated identities operated upon by functions, and having a type.

For instance, an object of CONS type could only be accessed by the "getter" functions CAR and CDR, which take a reference to the object, and by the "setters" RPLACA and RPLACD.
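
For a rough analogy, here is a sketch of the same shape in Haskell rather than Lisp, using IORefs for the mutable fields (purely an illustration of "encapsulated identity accessed through functions", not McCarthy's actual implementation):

    import Data.IORef

    -- A cons cell as an encapsulated, mutable identity.
    data Cons a b = Cons (IORef a) (IORef b)

    cons :: a -> b -> IO (Cons a b)
    cons x y = Cons <$> newIORef x <*> newIORef y

    -- "Getters" in the spirit of CAR and CDR.
    car :: Cons a b -> IO a
    car (Cons x _) = readIORef x

    cdr :: Cons a b -> IO b
    cdr (Cons _ y) = readIORef y

    -- "Setters" in the spirit of RPLACA and RPLACD: they mutate the cell
    -- in place, so every reference to it observes the change.
    rplaca :: Cons a b -> a -> IO ()
    rplaca (Cons x _) v = writeIORef x v

    rplacd :: Cons a b -> b -> IO ()
    rplacd (Cons _ y) v = writeIORef y v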

Meanwhile, the other higher level programming language in existence, Fortran, had just procedures acting on global variables.


I was interested to notice that Steele and Sussman describe Lisp as "an object-oriented language" on page 2 of this 1979 paper: https://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.....

There are other places in the Lambda papers and whatnot where they talk about objects, but I don't recall seeing that exact phrase. Of course, just what they meant by it is another question.


That's the wrong context :) He was talking about McCarthy's later discussion of what he called "situations", which is similar to temporal logic (which came later). Functional programming doesn't satisfactorily address the state-change problem.


So data structures shouldn't be viewed as static, but indexed by time. Fair enough. I still don't understand how this couldn't 100% be achieved via state monads, or at the very least purely functionally.

That said, I appreciate all of the answers here.


I am not Alan Kay {IANAK}.

My take is that from Kay's perspective computing is a mixture of mathematical and biological abstractions [he has degrees in both]. Monads only touch on the mathematical abstractions and ignore the biological ones. Kay may be of a mind that, absent biological abstractions, computing is diminished in its ability to solve important problems.


The thing about biology is that it's had a ~3 bn. year head start. It's also massively contingent; see evolution by natural selection. That means it may not actually represent any kind of global optimum[1].

[1] EDIT: Or even a local one if you're not optimizing the same thing that biology is.



