I don't get the impression that anyone is going into the "Maxwell's equations" part of the analogy. I didn't understand or think about it either until watching Sussman's "Programming is (should be) fun!"[0], where he explains that the dualism and interaction between `eval' and `apply' is supposed to be analogous to the magnetic and electric fields in Maxwell's equations. The well-known Yin-Yang picture containing `eval'/`apply' can also be drawn (and is, in his presentation) with \vec{B} and \vec{E}.
And to reiterate: we need more than a dichotomy (maybe three combinators?)
I mean, why are you so sure that the ingredient necessary to break the "generalized (cognitive) prisoner's dilemma" is not "an epsilon of payout (up to sign)"? It seems like undue arrogance to act (or not act) on that certainty.
> We cannot rule out the possibility that this ambivalence may have been intentional on Philodemus’s part
To show the proper peristophilia (would this be the peristophilandum?):
`K` forgets an argument; `eval` forgets the differences between all the syntaxes that share a semantic denotation; e.g., "1*1" and "1" are very different as strings (even their lengths differ), but they both evaluate to 1. (I believe this before-and-after-evaluation distinction is what drives the utility of intuitionistic logic in informatics; it's difficult for pre-eval `not` to be crisp.)
`S` grafts an argument into multiple places in an expression tree; `apply` does the same (in a lazy* language it even avoids the gratuitous eval of args); e.g., "fac `apply` 6" reduces directly to "1 if 6<=1 else 6 * fac `apply` (6-1)", in which the 6 has been grafted in triplicate (a concrete sketch follows the footnote below).
Does that make sense?
* I guess the sage would always write code such that it doesn't matter if evaluation be eager or lazy or even somewhere in between (cf https://news.ycombinator.com/item?id=40947303 ), but I have yet to figure out how to manage that...
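To make the forgetting/grafting concrete, a minimal Python sketch (the `graft` helper and the `fac_body` string are invented names, purely for illustration):

    K = lambda x: lambda y: x            # K forgets its second argument
    assert K("kept")("forgotten") == "kept"

    # eval forgets syntactic differences: distinct strings, one denotation
    assert eval("1*1") == eval("1") == 1

    # application as grafting: substitute the argument into every hole of the
    # body, without evaluating anything yet (the lazy reading of `apply`)
    fac_body = "1 if {n} <= 1 else {n} * (fac `apply` ({n} - 1))"

    def graft(body, arg):
        return body.format(n=arg)

    print(graft(fac_body, 6))
    # -> 1 if 6 <= 1 else 6 * (fac `apply` (6 - 1))   -- the 6 grafted in triplicate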
EDIT: [pedantry] Finnish salonkikelpoinen started with Latin sella, resulting in Italian sala, salone, going through French salon by way of German salonfähig
restating those integrals in bra-ket notation because HN isn't so ∳-friendly: ⟨∂o|x⟩ = ⟨o|dx⟩
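(Unpacked, assuming the pairing reads ⟨region|form⟩, this is just the generalized Stokes theorem:)

    \langle \partial o \mid x \rangle = \langle o \mid \mathrm{d}x \rangle
    \quad\Longleftrightarrow\quad
    \int_{\partial o} x = \int_{o} \mathrm{d}x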
Squiggol and origami programming seem like they ought to lead there, but maybe I've been following Leibniz too much and instead ought to have used some Newtonian astrology?
(that said, the residuation operator in semigroups, and hence quantales, does seem to have a liminal ability to move between bras and kets, at least for certain well-defined substructures: those for which moving between all possible suffixes and all possible prefixes, double time-reversal, is a round trip)
Thanks, I fancy myself an expert pareidoliac! Trying to keep the sarcasm productive, I see :) Well, at least you seem to enjoy it.
I'm surprised you didn't critique Sussman's (& Felleisen's?) conflation with E&M further; didn't they try this angle already? Actually, it seems they didn't take it far enough, so that video is going into the queue...
If the obstinate shall both delve and spin, who then will..
(Bless Isayevich!)
Okay, the quantales seem to be another useful generator of examples
(I need to watch the video at least. But, peering -- not delving yet -- at the code for SICM, he seems to be fascinated by abstract cohomology.)
Btw, just stumbled onto this (point against anglos):
(I think there's an excellent Maxwell/Kelvin(?) anecdote related to zoetropes as an appendix in one of Körner's books, but it may be some months before I lay hands on my copies again...)
while meditating on K (but not the KKK of the Kataastaasan Kagalangalangang Katipunan):
> (I am guessing their eval is your `apply`)
Almost: their eval is the result of evaluating my `apply`, so "fac `apply` 6" expands to "1 if 6<=1 else 6 * fac `apply` (6-1)" which can then be evaluated to "720"; the evaluation of the application is their `eval`, for which fac:ℕ^ℕ * 6:ℕ -> 720:ℕ.
(I guess my `apply`, being B^A * A -> B^1, is still isomorphic to their eval, so it does depend upon what "is" is)
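In code, roughly (a hedged Python sketch, with a thunk standing in for the not-yet-evaluated application):

    def fac(n):
        return 1 if n <= 1 else n * fac(n - 1)

    # "my apply": pair the function with its argument, i.e. build the
    # application B^A * A -> B^1, without reducing it yet
    application = lambda: fac(6)

    # "their eval": evaluate that application down to a value,
    # fac : N^N, 6 : N  |->  720 : N
    assert application() == 720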
> And are the SKI uphill or downhill derivatives :)?
With that in mind, `I` is obviously a flat derivative that just walks along contour lines...
Assuming we're following Lee's injunction to "be like water" (the path of the 10'000 things?), then `K` would be downhill, for it's always easy to throw away/project out/forget the inessential.
`S` would then be uphill, but in a special way: unless we're doing something special, like S(K(SII))(S(S(KS)K)(K(SII))), we want to go "uphill", but towards a pass; we're trying to use the application to build an "activated complex"* which will then spontaneously (via evaluation) ultimately reach a value, downhill from where we started.
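To make the uphill/downhill/flat reading concrete, here are the three combinators as (eager) Python closures; the last line only writes down the "special" term quoted above without ever forcing it:

    I = lambda x: x                                 # flat: walk the contour line
    K = lambda x: lambda y: x                       # downhill: discard the inessential
    S = lambda f: lambda g: lambda x: f(x)(g(x))    # uphill: graft x into two places

    assert S(K)(K)("anything") == "anything"        # S K K behaves like I

    # The "activated complex" S(K(SII))(S(S(KS)K)(K(SII))) keeps building
    # ever-larger applications; under Python's eager evaluation it would never
    # reach a value, so it is only constructed here, never applied:
    SII = S(I)(I)
    omega_ish = S(K(SII))(S(S(K(S))(K))(K(SII)))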
... and thinking about how their `eval` is continuous:
Turing's argument for building his machine to work on an infinite tape of squares holding a finite number of symbols (https://news.ycombinator.com/item?id=41048777) makes sense for human (and machine) computers, but presumably gods (and angels, and therefore demons and devils?) can make infinitely fine distinctions "at a glance", so they could build a machine with a single square and an infinite number of symbols instead.
However, angelic lambda calculus fails to have any models, for there'd be an infinite regress in the cardinality of higher order functions. If my understanding of Scott is correct, it's the continuity of the exponential and its `eval` which keep higher order functions from "blowing up", and so (although they could presumably do everything in O(1) with infinite lookup tables) if they actually wish to compute in Heaven they'd have to do it pretty much like we do. As below, so above...
> It did not last: the Devil howling 'Ho, Let Einstein be,' restored the status quo. —JCS
Especially enjoyed the mention of the Y combinator; my barbaric non-intuition would have been that nothing so activated could arise from 2 (or just the 1!?! beta-reduction) seemingly trivial constructs, but this punches me in the face like Mike Tyson says (and continuously, in the gut)
(I admit I am not in the pareidolic flow ATM, but promise to take a long look at these passages when I am)
`*` is an involution in C*, however, and in Kleene `*` is idempotent.
Funnily enough, I've written a codegen with an involutive `*`: if one has Dijkstra-style IF...FI and DO...OD constructs in the intermediate language, it's possible to generate reasonable code in such a manner that (merely by patching all the jumps appropriately) one can round trip both IF->DO->IF and DO->IF->DO, with IF->DO corresponding to "star" and DO->IF corresponding to what Conway(?) called the "starth-root".
(Just as LISP got LABEL very early and LABELS relatively late, I think a great deal of harm has been done because data processing, from the card era up through the early micros, had harsh pragmatic reasons to analyse A* as 1|AA*, yet many things simplify by multiplying cases in the pursuit of symmetry and instead analysing A* as 1|A|A*A*)
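To make the contrast concrete, a hedged Python sketch of the two analyses as membership tests for A* over a small word set (function names invented for illustration):

    from functools import lru_cache

    A = ("ab", "ba")

    def star_sequential(w):
        """A* = 1 | A A*   -- the card-era, strictly left-to-right analysis."""
        if w == "":
            return True
        return any(w.startswith(a) and star_sequential(w[len(a):]) for a in A)

    @lru_cache(maxsize=None)
    def star_symmetric(w):
        """A* = 1 | A | A* A*   -- more cases, but symmetric under reversal."""
        if w == "" or w in A:
            return True
        return any(star_symmetric(w[:i]) and star_symmetric(w[i:])
                   for i in range(1, len(w)))

    assert star_sequential("abba") and star_symmetric("abba")
    assert not star_sequential("aab") and not star_symmetric("aab")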
To productive puns! And as Gr might have said, ignorance is the mother of discovery (and other beasts)
Glancing at Scott-Rabin I couldn't find any mention of powersets but on p7 his count of "between indices" sounds like he was enumerating a power set. Or did you have some other passages in mind?
(he started off trying to formalise denotational semantics with powersets, and only after failing there fell back to dcpo's; recently I was reading something about the one-to-one correspondence between finite posets and finite topologies which implied that for the infinite case he should've expected to wind up with dcpo's in general when the particular nice cases are lattices, but now I'm blanking on what [and where] it was.
Slogan: computer scientists prefer their models topless.)
My barbaric/peasant state of intuition says that K is more like a differential equation (Maxwell's equations) but S is more akin to the "integral form" (Jefimenko's)
Note how the People feel like they understand M but for J they feel at least a bit of wariness..
And thanks for the JCS quote, it does feel relevant to the struggle to even design useful quantum computers
Going back to mulling the Lagrange multipliers now :)
Pace Darwin, the anglo geniuses seem generally trigger happy -- not good for HN idolatry (up to and including the manPG)
(Grassmann vs Hamilton, if you want to visit that general topic)
> the struggle to even design useful quantum computers
My knee-jerk question for the last few decades (which I probably st^Wliberated from MGM?) is "given that we succeed in decoupling the system enough to accomplish some quantum computation, how do we expect to be coupling it enough to feed the problem in and write the results out?"
This code tokenizes and parses (something the Lisp code skips) the binary encoding of a lambda term from an input bitstream and applies that term to the remainder of input.
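Not the code being referred to, but a minimal Python sketch of parsing the standard binary lambda calculus encoding (00 = abstraction, 01 = application, a run of 1s ended by 0 = a de Bruijn variable), returning the term plus the offset where the remaining input begins:

    def parse(bits, i=0):
        """Parse one term from a string of '0'/'1' characters, starting at i."""
        if bits[i:i+2] == "00":                  # 00 M     ->  lambda M
            body, i = parse(bits, i + 2)
            return ("lam", body), i
        if bits[i:i+2] == "01":                  # 01 M N   ->  (M N)
            fun, i = parse(bits, i + 2)
            arg, i = parse(bits, i)
            return ("app", fun, arg), i
        n = 0                                    # 1...1 0  ->  de Bruijn variable n
        while bits[i] == "1":
            n, i = n + 1, i + 1
        return ("var", n), i + 1

    stream = "0010" + "101101"                   # the identity term, then arbitrary input
    term, end = parse(stream)
    print(term, "| remaining input:", stream[end:])
    # ('lam', ('var', 1)) | remaining input: 101101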
I think there's good reason why Lisp's `apply` and `eval` functions are the "Maxwell Equations" of CS. Going back to the Lambda Calculus, `apply` is the β-reduction rule, which is the basis for everything, and `eval` is how you cross the data-code boundary. Between the two you get a system that can (1) perform any computable function, and (2) easily work with its own structure.
Like Maxwell's original (8? 12?) equations, there are probably some simplifications we could arrive at given more recent notations; in particular `pairlis` instead of zip and the gratuitous sequentiality of `evcon`?
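A toy sketch of that mutually recursive pair in Python (not McCarthy's code: lists stand in for S-expressions, a dict merge with zip stands in for pairlis, and a plain two-armed `if` stands in for evcon):

    def m_eval(x, env):
        if isinstance(x, str):                       # a symbol: look it up
            return env[x]
        if not isinstance(x, list):                  # numbers etc. self-evaluate
            return x
        if x[0] == "quote":                          # ["quote", e] -> e, unevaluated
            return x[1]
        if x[0] == "if":                             # a two-armed stand-in for evcon
            _, test, then, alt = x
            return m_eval(then if m_eval(test, env) else alt, env)
        if x[0] == "lambda":                         # ["lambda", params, body]
            return ("closure", x[1], x[2], env)
        fn = m_eval(x[0], env)                       # otherwise: an application
        args = [m_eval(a, env) for a in x[1:]]
        return m_apply(fn, args)

    def m_apply(fn, args):
        if callable(fn):                             # primitive procedure
            return fn(*args)
        _, params, body, env = fn                    # user closure: bind args, eval body
        return m_eval(body, {**env, **dict(zip(params, args))})

    env = {"*": lambda a, b: a * b, "-": lambda a, b: a - b, "<=": lambda a, b: a <= b}
    square = m_eval(["lambda", ["n"], ["*", "n", "n"]], env)
    print(m_apply(square, [6]))                      # -> 36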
I’m always a little confused when people are blown away by the “the program IS data” revelation. What did they think the program was? Why is this seen as transcendental LISP enlightenment when it’s the main feature of a von Neumann architecture?
If your preferred method of changing program behaviour is directly editing bytes of machine code in unstructured RAM while the program is running, then yes, the fact that it's a Von Neumann architecture does give you that "the program is data" duality.
That's not usually what people mean though.
They usually mean that the tools you have within your user-level application for working on useful data structures directly apply to creating new functions and modifying that application. That's non-obvious, especially if your mental model is something that looks like C.
Edited to add: also worth knowing is that historically the Von Neumann architecture itself was invented only 6 years before Lisp was. The "Maxwell's Equations" would have been written in a context where separating instruction memory and data memory would have been a commonplace assumption.
> If your preferred method of changing program behaviour is directly editing bytes of machine code in unstructured RAM while the program is running, then yes, the fact that it's a Von Neumann architecture does give you that "the program is data" duality.
On some platforms, e.g. the Commodore 64, it wasn't uncommon to write self-modifying code, which is exactly what you describe. The fact that code can be data also seems somewhat clearer if you've ever had a use for structures of function pointers in C, or if you've manually written jump tables or directly-threaded interpreters in assembly.
But, IMO, those aren't as interesting as the "code as data" model in Lisp, because in Lisp, it's the high-level code that's reified into its own untyped data structure, which makes it a lot more flexible and powerful than the literal bytes of machine code that Commodore 64 programmers modified at runtime.
Turns out I'm wrong about dates in my historical edit: it's 16 years, not 6. No idea how quickly von Neumann took over from Harvard in that period, so it's conceivable that assuming a merged memory was already the standard.
The closest I've come to being "blown away" by it was a project where I needed to clone the results of an Excel workbook onto the server, without installing Excel. I started with PHP as my normal language, but it ran into the ground when I had to deal with order of operations, new operators, and cell references.
I'd never really used a Lisp-based language before, but I decided to give Clojure a try, and it was the first time I grokked the value of "the program is the data is the program".
In PHP I had different "things" - operators, functions, scalar variables, class variables - and I needed to think about what was assigned and when. But in Clojure, everything was "data" that I could use to construct bigger pieces of data. Maybe that's obvious to better programmers than me, but my mind was blown.
On a shallow level, it is trivial, as in "a program is a file of ASCII codes; like any other file, it is data."
A little deeper, you can think "an obj file is bytes, even stored in a section called TEXT, so again, a program is data."
But in Lisp the parallel comes at a much higher abstraction level, in the sense that you learn "algorithms AND data structures" as two different things, and when they conflate, that is the aha moment.
It is interesting when you start to understand that data can affect control flow exactly as code does, that you can generate code on the fly as needed, just as data (which is not really possible in C -- in an easy way, at least), and you can even generate code, which generates code, which gene… that runs at the end.
Let me add another thing: you have probably heard "self-modifying code is dark, dangerous, not recommended". Now, if you really embrace the idea that data is code and code is data, then you start to be very cautious with variables. Any change to a variable is a change to code… I think that is why people in functional languages value immutable data so much.
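A small illustration of generating code as data and only then running it, with Python's exec standing in for the Lisp machinery (the names here are made up):

    # The "program" exists first as ordinary data, assembled at runtime...
    op, n = "*", 6
    source = f"result = {n} {op} {n}"

    # ...and only then becomes behaviour.
    namespace = {}
    exec(source, namespace)
    print(namespace["result"])        # -> 36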
I think that's a really interesting point. I don't think there are many high-level languages where code and data are fundamentally the same kind of thing. I'm aware of only Lisp and Tcl (funnily, there's a thread about Tcl on the front page right now). Also, people probably don't think about machine code all that much, and even if they do they probably don't consider self-modifying code. The only exception to this may be JIT compilation? Anyway, it seems to be a concept that's rare in high-level languages, so I understand the surprise.
"Code is data" doesn't refer to self-modifying code.
It means that the source code of the program is a data structure, whose processing can be customized in the programming language.
In machine language, you do not have access to customize the assembly language. For that, you work in a different language: the macro language of the assembler.
[0] https://www.youtube.com/watch?v=2MYzvQ1v8Ww, around minute 13