I feel like you could build a semester-long university course out of this single web page, and it would be a really valuable way to spend that semester.
It is inspiring in the traditional sense of the word, not in the TED Talks sense: it inspires me to blow the dust off old ideas I've had, get working on them, and set the quality bar high for myself.
I was about to mention SectorLisp as a somewhat related project when I noticed that it was written by the very same genius, not to mention Blinkenlights, which has provided many hours of delight. Incredible!
Even though I'm far from understanding your paper in depth, I like the way you explore new directions in lambda calculus, mainly, I think, following John Tromp's work. I'm exploring lambda calculus with binary numbers of arbitrary size using a home-made language, http://lambdaway.free.fr/lambdawalks, in this wiki page, http://lambdaway.free.fr/lambdawalks/?view=oops6, and it's fun.
Your work is inspiring. Thank you.
I was thinking the same, and also that this is just a glimpse of what's possible, perhaps intentionally so, to raise the interest of those genuinely curious. Unfortunately I don't fully understand it from just glancing over the article (sorry, my bad), but I understand the implications, and those are incredibly fascinating to me. I've always thought about living code (just like DNA) living through packet state.
Anyway I think abstracting away variables+lexer via sed and indexing is hot!
My mind is so rusty, but I can imagine constructing an elegant & expressive language using "category theory + lambda calculus" and oh my god... instead of computing a single result using binary encoding we can compute all results in parallel (array-based) by using a lossless compressor (similar to data-fusion).
A Finite State Entropy Encoder (FSE) with a multi-symbol alphabet (>2) can be used for this purpose. FSE is a very interesting development in ANS encoding (Asymmetric numeral systems). I need to put all these pieces together + self-modifying code. Because I don't have the feeling that this is a dumb idea at all.
Maybe infecting any device, like the Skynet code, isn't that far of a stretch? Disregarding any malicious intent, this is simply inspiring. Maybe I'm just too much of a nerd, and enjoy this a bit too much... I don't know.
Slightly unrelated, but this thread is giving me a serious learning itch. Any book or online class recommendations on lambda calculus, combinators and that whole area of CS? Preferably written for the experienced programmer but very average mathematician.
I enjoyed The Implementation of Functional Programming Languages by Simon Peyton Jones. It's how you would implement a Haskell-like language (well, Miranda-like, but Miranda is more or less the closed-source precursor to Haskell) using lambda calculus. It goes through lambda calculus, type checking, combinators and supercombinators, and G-machines.
Also check out the "sequel" https://www.cis.upenn.edu/~bcpierce/attapl/. Pierce wrote the first book, whereas this one is a series of articles edited by Pierce. However, Advanced Topics covers a few areas that I have been hard pressed to find covered elsewhere: row types, dependent types, linear types, etc.
One really interesting development of the Lambda calculus is the De Bruijn index. It does away with variable names altogether. It solves the name collision problem and would be an interesting way to write hygienic macros.
Tromp's binary lambda calculus, which is what Justine is using, is based on de Bruijn indices.
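To make the de Bruijn idea concrete, here is a tiny illustrative sketch (in Python, with hypothetical Var/Lam/App classes, not anything from the article): each variable is an integer counting how many binders sit between it and the lambda that introduced it, so the name-collision problem disappears and alpha-equivalent terms become literally identical.

```python
# Illustrative sketch: lambda terms with de Bruijn indices.
# Var(0) refers to the nearest enclosing lambda, Var(1) to the next one out.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    index: int      # de Bruijn index; 0 = innermost binder

@dataclass(frozen=True)
class Lam:
    body: object    # abstraction; note it binds no name at all

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

# \x. \y. x  and  \a. \b. a  become the very same term, so
# alpha-equivalence is plain structural equality:
k1 = Lam(Lam(Var(1)))   # \x. \y. x
k2 = Lam(Lam(Var(1)))   # \a. \b. a
assert k1 == k2
```

Since there are no names, a hygienic macro expander built on this representation cannot accidentally capture a variable; there is simply nothing to capture.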
It works, but it's very fiddly. My favourite way of solving the problem of names in the lambda calculus is to use Pitts-Gabbay nominal syntax. Wikipedia has a weak article on them, which I mean someday to fix up: https://en.wikipedia.org/wiki/Nominal_terms_(computer_scienc...
I really like de Bruijn indexes, but whenever alternatives are suggested I always like to mention the paper "I am not a number, I am a free variable", if only for the title ;)
I love this. I love this person's mind. I've seen her (assuming that is the correct pronoun) stuff before, always fascinating.
And then I just discovered that for some unknown reason, this person has already blocked me on Twitter (I can recall exactly zero interactions with them, but I must have followed them already due to things like APE).
Not sure if she sees this but I'm sorry about whatever I said that I don't recall? Twitter username is same as this one here. Would love to see you in my feed again, will promise to keep big mouth shut
They're called "Lambda Diagrams". See the inventor's homepage [1], and some animations [2]. There's actually a long and obscure history of visualizations for lambda calculus. One of the originals is "To Dissect a Mockingbird" [3], which, if I recall correctly, was the main thing on the internet in 1996.
You should also check out the rest of her work! It's all amazing, SectorLISP, Blinkenlights and APE (Actually Portable Executable) are the really well known ones, but many of the other projects are things many programmers would have as the centerpiece of their portfolio.
I remember memorizing how to solve (reduce? resolve? expand?) lambda calculus expressions by hand that went over multiple lines for my degree, but I never had the faintest clue what it was for or why I was learning it, and I quickly forgot how to do it.
Reduction (in particular "beta-reduction") is essentially 'calling a function with an argument' (just defined in a precise, mathematically-rigorous way). Since lambda calculus only has functions (AKA "lambdas"), "reducing an expression" is another way of saying "running/executing a program".
"Expansion" would be going the other way; e.g. introducing an abstraction layer which, when reduced/executed, would result in the original expression.
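To make "reduction is just calling a function" concrete, here is a minimal sketch of one beta-reduction step over de Bruijn-indexed terms (the tuple encoding and function names are illustrative, not from the article; the shift/substitute recipe is the standard one for nameless terms):

```python
# Minimal sketch of beta-reduction on de Bruijn-indexed terms.
# Terms are tuples: ("Var", i), ("Lam", body), ("App", fn, arg).

def shift(t, d, cutoff=0):
    """Add d to every free variable index in t (those >= cutoff)."""
    tag = t[0]
    if tag == "Var":
        return ("Var", t[1] + d) if t[1] >= cutoff else t
    if tag == "Lam":
        return ("Lam", shift(t[1], d, cutoff + 1))
    return ("App", shift(t[1], d, cutoff), shift(t[2], d, cutoff))

def subst(t, j, s):
    """Replace variable j in t with term s."""
    tag = t[0]
    if tag == "Var":
        return s if t[1] == j else t
    if tag == "Lam":
        return ("Lam", subst(t[1], j + 1, shift(s, 1)))
    return ("App", subst(t[1], j, s), subst(t[2], j, s))

def beta(body, arg):
    """One step of (\. body) arg: substitute, then fix up indices."""
    return shift(subst(body, 0, shift(arg, 1)), -1)

# (\x. x) y  reduces to  y
identity = ("Lam", ("Var", 0))
y = ("Var", 7)                      # a free variable standing in for y
assert beta(identity[1], y) == y
```

Running a lambda-calculus program is nothing more than applying this step repeatedly until no redex remains.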
Can I just say this whole presentation (like the previous Lisp boot-sector one) is just astonishingly well put-together, on top of the tour-de-force achievement.
The blc.S didn't build for me using `-oformat:binary`. Changing to `--oformat=binary`, it compiled, but then I get core dumps. Running up-to-date Ubuntu 20.04.4, Linux x86_64.
for cc in cc gcc clang
do
  echo "## $cc"
  # Build the interpreter as a raw binary (no ELF headers).
  $cc -no-pie -static -nostdlib -o blc -Wl,--oformat=binary blc.S || exit 1
  # 0010 encodes the identity function; the rest of stdin is its input.
  { printf 0010; printf 0101; } | ./blc; echo
done
IIUC, this doesn't necessarily translate to more efficient programming, just because the program size is small. It's elegant. Just wondering about possible applications. Maybe I'm not understanding how currying works.
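On the currying point: in the lambda calculus every function takes exactly one argument, and multi-argument functions are simulated by functions that return functions. A quick illustrative sketch:

```python
# Currying: a two-argument function as nested one-argument functions,
# mirroring the lambda-calculus term \x. \y. x + y.
def add(x):
    return lambda y: x + y

add3 = add(3)            # partial application yields a new function
assert add3(4) == 7
assert add(1)(2) == 3    # "add 1 2" in curried style
```

Nothing about currying changes efficiency by itself; it is just how multi-argument application is expressed when the calculus only has unary functions.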
Assuming you verified that it's this particular 400 byte interpreter,
which is as sandboxed as it gets, then the worst that can happen is hogging all your memory and cpu.
Given Spectre and rowhammer, I just don't believe that. This gives attackers full degrees of freedom to reactively search for a crack in the process space. With a normal statically compiled decoder, an attacker has a certain degree of freedom, but it is far less (unless one of your formats is a programmable environment, e.g. NSO's iOS exploit).
I, for one, would not be comfortable having content distributed as a blob that runs as a virtual machine that builds the software that decodes the content. Justine has made this practical. I can see the appeal. I just worry that we're trading in our jpgs for exes in a sandbox, and I'm not sure it's worth the risk.
I'm flattered you think my hobby project might replace jpeg. Maybe one day we'll figure out how to do discrete cosine transform as a lambda expression. All I'm trying to accomplish here is sharing interesting ideas and inspiring the curiosity of programmers who dare to dream that things can be better than they are. It's not the responsibility of the educational materials I provide free of charge to folks who love computer science to help you comply with hardening policies aimed at reducing risks relating to malicious actors exploiting the most recently disclosed weaknesses in your hardware. Powerful knowledge has many applications, so it always makes me sad when people choose to focus on the negative ones.
If you want something that can provide that level of assurance then I'd suggest looking into Blinkenlights, which can be retooled to abstract a level of memory obfuscation and processor insulation that effectively neutralizes such threats, much stronger than alternatives like docker/gvisor/vms/etc., but at a cost of performance.
You are right, of course. I must have left my whimsy in my other pants. In my defense, I imagined that you might, like, over the weekend, produce an 800 byte standard library that includes trig and a Lua interpreter. Blinkenlights looks amazing, and I will check it out.
I wouldn't say that the OP has made this possible; there is nothing groundbreaking here. It's a cool trick that isn't widely employed, for the exact reasons you mentioned.
Should be trivial to add running off the end into native code, presumably after decompressing it. The process is: 0. start a decompressor; 1. decompress a decryptor; 2. call that to decrypt a composite payload; 3. run off the end into a better, native decompressor to decompress the real payload, and write it out; or 4. run into that.
I.e. it does not suffice to verify that the header is really a lambda calculus evaluator.