After learning Smalltalk, Oberon, and Modula-3, I started a journey into the world of Xerox PARC and its influence on the ETHZ and DEC research labs, and became amazed at how computing could already have looked by the mid-90s.
To the point that using UNIX stopped being fun; I wanted one of those environments, not a PDP-11 replica.
I actually used Smalltalk before Java was announced to the world.
We were using Smalltalk/V and VisualWorks for some university classes.
Luckily our university library had all the Smalltalk canonical books available and I spent quite a few long nights reading them.
All of these are "slogans" aimed at lighting up broken minds.
And "we" resonate with them i.e. we really have broken minds.
Taking this talk seriously means getting together and working hard at making computer science a... science. Finding problems worth solving, not just solving problems we can solve (or worse, problems we do not/should not have).
How hard is it to do that with broken, insulting, narrow-minded (autistic to some degree), and violent minds (think of high priests of lower cults)?
The fact that these ideas did not take off (or took off wrongly: Java, the "modern" GUI that initially targeted 8-year-old children, the iPad, which is a lying Dynabook) makes you wonder. Despite the obvious and useful function that computation could satisfy, i.e. infusing the most powerful ideas into young children and advancing those ideas:
What in the world is holding us back?
Can we break loose?
I don't think having a negative spin on having a buggy brain is the constructive approach. Avoiding pop science is really the key point Alan Kay keeps going back to. Learn from the elders; there is much wisdom in the history of computing.
Having a "negative spin" is precisely what we need.
We mostly do not even understand that we are stuck in a "Pink plane" (see Alan Kay videos for Pink|Blue plane definitions).
Result: stagnation, with more or less identical "paradigms" for more than 50 years (FORTRAN?).
Computing can do much better than Facebook or Google... where is my Dynabook?
 - https://youtu.be/xrIjfIjssLE
How so? It's a solid piece of software which has been around since forever (since before Ctrl-C and Ctrl-V conventions ever existed). We're all programmers, and Emacs is all about being a programmer's editor.
If you want to see an Emacs-user out-hipster every single web-developer who thought he was cool when he used HTML for his presentations instead of Powerpoint/Keynote... Then watch this: https://www.youtube.com/watch?v=TMoPuv-xXMM
It's incredibly contrived, and incredibly nerdy, yet this, to me, encapsulates so much about what Emacs is. And I love it :)
Programmers have the devious hobby of tackling complexity of their own making. It isn't praiseworthy, or something to be proud of.
Some people also still use Acme which AFAIK does not have syntax highlighting.
In the video (and almost anywhere else you look) you'll note he talks about recursion on the concept of "computer" as a way to scale, because that way the parts are as powerful as the whole.
If you split that concept into the sub-notions of "procedures" and "data", you no longer have that recursive principle, and "functions" in this context are sufficiently equivalent to procedures.
Going the other way, functions don't really scale up; many if not most things in computing are not functions to be computed. One could argue that computers and "computer science" are misnamed, as computation is more the exception than the rule. "Data-around-shufflers" would be more appropriate.
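As a hedged sketch of that recursive principle (all names here are made up for illustration), consider objects in Python whose only interface is "receive a message"; a composition of them exposes exactly the same interface, so the parts are as powerful as the whole:

    # A "computer" here is anything that can receive a message.
    class Adder:
        def receive(self, message):
            op, args = message
            if op == "add":
                return sum(args)
            return None

    class Network:
        """A group of computers is itself a computer: same interface."""
        def __init__(self, parts):
            self.parts = parts

        def receive(self, message):
            # Forward the message to the first part that can answer it.
            for part in self.parts:
                result = part.receive(message)
                if result is not None:
                    return result
            return None

    # Nesting works because the whole behaves exactly like a part:
    system = Network([Network([Adder()])])
    print(system.receive(("add", [1, 2, 3])))  # -> 6

Split that into bare procedures and data, and the composite no longer has the same shape as its parts.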
I didn't see the exact monad references, but they look like ways of working around the fact that the "function" primitive is inappropriate. It is really nice, mind you, and has wonderful properties. Kind of like circles as the basis for astronomical orbits: they are "perfect", but the world isn't perfect in the same way, so trying to model the imperfect world with these perfect primitives leads to having to introduce epicycles/monads.
My 2 €¢
Edit: “Functions are nice, but you need to advance the state of the system over time” — https://www.youtube.com/watch?v=fhOHn9TClXY&feature=youtu.be...
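A minimal sketch of what that quote means, assuming nothing beyond plain Python (the names are hypothetical): a pure function can still advance the state of the system over time by taking the old state and returning the new one, which is roughly what a State monad packages up.

    # Pure state transitions: take the old world, return the new world.
    def deposit(amount):
        return lambda balance: (None, balance + amount)  # (result, new_state)

    def check():
        return lambda balance: (balance, balance)

    def chain(step, make_next):
        """Run one step, feed its result into the next step (a 'bind')."""
        def run(state):
            result, state2 = step(state)
            return make_next(result)(state2)
        return run

    program = chain(deposit(100), lambda _: check())
    print(program(0))  # (100, 100): final result and final state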
In French we have "calculateur", which is the direct translation of "computer" but commonly used only in the context of scientific computing, and "ordinateur", which is the common name for computers... The meaning of "ordinateur" is pretty close to "data-around-shufflers", from Latin ordino (“to order, to organize”) - https://en.wiktionary.org/wiki/ordinateur: "in its application to computing, [ordinateur] was coined by the professor of philology Jacques Perret in a letter dated 16 April 1955, in response to a request from IBM France, who believed the word calculateur was too restrictive in light of the possibilities of these machines (this is a very rare example of the creation of a neologism authenticated by dated letter)"
Alan Kay seems to push more in the direction of data as CRDTs or fully vector-clocked, so that time is just an additional way to reference some data.
It is close to the idea of separating time from space, or of combining the two.
In CS, we tend to consider a static vision of the world, with a function applied to it. The truth is that the world exists at different times and is wildly different at each of those times. So a piece of data would be defined by its place in memory and the clock time (vector clock) at which it was in that memory. Closer to what CRDTs do, in a way.
I'm not familiar with CRDTs; can you compress the idea of how to "remember" a whole history of states with them, and why vector clocks are important? The introduction on Wikipedia reads like they are useful for (geographically) distributed processing. I can't really relate or contrast this to monads.
That is what Consistency in the CAP theorem hints at.
The whole idea is that if you define things not only by their state but also by the order in which that state has evolved, it enables you to know which state is the most recent, but also to go back in time. It is especially useful if someone comes in late, after you have updated the state, and you discover that their update should have happened before the most recent change.
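As a minimal vector-clock sketch in Python (node names are hypothetical): each participant keeps a counter per node, and comparing two clocks tells you whether one update happened before another, or whether they are concurrent, i.e. whether someone "came in late".

    def tick(clock, node):
        """Advance a node's counter before it publishes an update."""
        clock = dict(clock)
        clock[node] = clock.get(node, 0) + 1
        return clock

    def compare(a, b):
        """Return 'before', 'after', 'concurrent', or 'equal'."""
        nodes = set(a) | set(b)
        a_le_b = all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
        b_le_a = all(b.get(n, 0) <= a.get(n, 0) for n in nodes)
        if a_le_b and b_le_a:
            return "equal"
        if a_le_b:
            return "before"
        if b_le_a:
            return "after"
        return "concurrent"

    v1 = tick({}, "alice")     # alice updates the state
    v2 = tick(v1, "bob")       # bob updates after seeing alice's change
    stale = tick({}, "carol")  # carol updates without seeing either
    print(compare(v1, v2))     # 'before': v1 happened before v2
    print(compare(stale, v2))  # 'concurrent': a late/conflicting update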
Forget the idea of data being hidden behind a procedural interface.
Another way to put it:
You have a dog that is dirty at 1600. So you decide to clean him and put him in a bath. He is now clean at 1610. Now another person (a thread? a computer? no idea) comes along at 1620, having been ordered at 1550 to clean the dog, because someone saw he was dirty. That person does not ask whether the dog is dirty or not. He got an order and carries it out. You now have a dog that is being cleaned again.
From Alan Kay's point of view, there would not be a single dog with a single name, but a dog whose identity is defined by its name and the moment you named it. So when the person who saw that the dog was dirty at 1550 comes to clean him at 1620, when he is already clean, he would take dog1550 and not dog1620. He would clean a dirty dog, not a clean one.
He would be in another leg of the Trousers of Time.
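A toy version of that idea in Python (the dog and the timestamps are just the example above): every state change creates a new immutable version keyed by name and time, so an order issued against dog1550 resolves to the dirty dog, not the clean one.

    history = {}  # (name, time) -> state; versions are never overwritten

    def record(name, time, state):
        history[(name, time)] = state

    def state_at(name, time):
        """Resolve a (name, time) reference to the latest state at that time."""
        times = [t for (n, t) in history if n == name and t <= time]
        return history[(name, max(times))] if times else None

    record("dog", 1550, "dirty")
    record("dog", 1610, "clean")  # the bath finished at 1610

    print(state_at("dog", 1550))  # 'dirty' -> the 1550 order was justified
    print(state_at("dog", 1620))  # 'clean' -> no need to wash him again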
A couple of papers: the seminal one by Leslie Lamport on logical clocks (the idea vector clocks build on):
And here is the McCarthy paper that Kay talks about:
I answered here. I linked the paper by McCarthy at the end.
Contrast this to procedural or object-oriented programming, where usually the parts know a lot more about the context (state) than they really need to know. This can lead to implicit (and hard-to-discover) assumptions about the state of the world that are broken later on when some parts are changed, leading to bugs.
The way this works is by combining functions with (higher-order) functions. For example, chain a function that takes A and returns B with a function that takes B and returns C. This is done without ever speaking of concrete inputs; reducing unnecessary knowledge should lead to fewer bugs. (Whether that's a more practical approach is another question.)
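A minimal illustration of that chaining in Python (the concrete functions are just placeholders for A -> B and B -> C):

    def compose(f, g):
        """Chain f: A -> B with g: B -> C into a single A -> C function."""
        return lambda a: g(f(a))

    parse = int                      # str -> int
    describe = lambda n: f"n = {n}"  # int -> str

    pipeline = compose(parse, describe)
    print(pipeline("42"))  # 'n = 42', built without naming any concrete input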
Lisp is not only where functional programming originated, but also object-based programming leading to OOP. Lisp was the first language which had first-class objects: encapsulated identities operated upon by functions, and having a type.
For instance, an object of CONS type could only be accessed by the "getter" functions CAR and CDR, which take a reference to the object, and by the "setters" RPLACA and RPLACD.
Meanwhile, the other higher level programming language in existence, Fortran, had just procedures acting on global variables.
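For concreteness, a rough Python rendering of that discipline (CAR, CDR, and RPLACA are the real Lisp operators; the Python class itself is only a sketch): the pair is reached exclusively through functions that take a reference to the object.

    class Cons:
        """An encapsulated pair; callers go through the functions below."""
        def __init__(self, head, tail):
            self._head, self._tail = head, tail

    def car(c):            # getter for the first slot
        return c._head

    def cdr(c):            # getter for the rest
        return c._tail

    def rplaca(c, value):  # destructive setter, as in Lisp's RPLACA
        c._head = value
        return c

    lst = Cons(1, Cons(2, None))  # the list (1 2)
    print(car(cdr(lst)))          # 2
    rplaca(lst, 99)
    print(car(lst))               # 99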
There are other places in the Lambda papers and whatnot where they talk about objects, but I don't recall seeing that exact phrase. Of course, just what they meant by it is another question.
That said, I appreciate all of the answers here.
My take is that from Kay's perspective, computing is a mixture of mathematical and biological abstractions [he has degrees in both]. Monads only touch on the mathematical abstractions and ignore the biological ones. Kay may be of a mind that, absent biological abstractions, computing is diminished in its ability to solve important problems.
 EDIT: Or even a local one if you're not optimizing the same thing that biology is.
Could you please let me know which book you are referring to here? Thanks!
more like "thundering nerds" ;)
it seems too chaotic though
synonyms: crush, jam, trampling
hmm... "traffic jam" :)
Then he asked which session we were going to next, so we all hit one about concurrency issues. It was jammed, and we came in late to stand in the back. The presenter bemoaned locking and inconsistency, then got to solutions. He started with a slide on immutability and lightweight functional actors in Erlang, which had solved the problem 30 years ago, and pointed out Joe, who got a roomful of cheers, laughs, and validation for his work, in person.
It was all very human.
History Bonus - Armstrong chatting with Stallman:
I like Joe Armstrong, and he's brilliant of course, but he interrupted too much in places. It seemed to be out of enthusiasm, though, so it doesn't really bother me.