I wish the "Iverson bracket" (number 0 in this list) had caught on in a bigger way. It can really simplify writing conditional expressions, as opposed to e.g. a case expression.
It's also an exceptionally useful technique in mathematics if you've got a few nested sums with dependent indices. By writing the conditions as Iverson brackets, it becomes trivial to change the order of summation or to change one of the variables.
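Concretely (a standard identity, my own example rather than one from upthread): move the dependent bound into a bracket and both orders of summation read off from the same condition:

    \sum_{i=1}^{n} \sum_{j=1}^{i} a_{ij}
      = \sum_{i,j} a_{ij} \, [1 \le j \le i \le n]
      = \sum_{j=1}^{n} \sum_{i=j}^{n} a_{ij}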
I don't know if Knuth popularized it, but it's somewhat common in combinatorics. Similar idea to using indicator functions when doing change of variables in calculus.
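Spelling out that analogy (standard calculus, my wording): the indicator trades an awkward integration domain for an unconstrained one, so a substitution only has to touch the integrand:

    \int_A f(x)\,dx \;=\; \int_{\mathbb{R}} f(x)\,\mathbf{1}_A(x)\,dx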
I think the most interesting bit of trivia here is the discussion of Bertelsen's Number: 50847478, famous for being a published but incorrect count of the primes less than 1e9. The correct answer is 50847534, but the incorrect value was derived in the 19th century and persisted in textbooks through the 20th.
In some ways, with computers, this is pretty trivial; what's impressive is that this was computed long before computers were available, and although incorrect, was remarkably close. Computers have spoiled us.
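For a sense of how trivial (a minimal Python sketch of my own, not how the 19th-century count was done): a sieve of Eratosthenes gives the count directly, and run at n = 10^9 it reproduces the 50847534 above, though at that size you'd want a segmented or bit-packed variant to keep the roughly 1 GB sieve in check:

    def prime_count(n):
        # Sieve of Eratosthenes: count the primes <= n.
        sieve = bytearray([1]) * (n + 1)
        sieve[0] = sieve[1] = 0
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                # cross off p*p, p*p+p, ... as composite
                sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
        return sum(sieve)

    print(prime_count(10 ** 6))  # 78498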
Cool, but I was disappointed to find that it assumes you already know enough APL to understand the extensional definitions or analogies it gives. I was hoping it would be more of a “bootstrapping APL from a simple, explicitly-defined-in-the-text set of operators” kind of thing.
APL is not so much a model of computation as a new notation for semantics which already exist. The real semantic explanation of APL is set theory and traditional mathematics.
Probably the best deep ground-level explanation of APL is Iverson’s paper “Notation as a Tool of Thought.” [0]
The bootstrapping explanation you describe sounds a lot like what Paul Graham did in “On Lisp” [1] and in a much more complex fashion, Queinnec in “Lisp in Small Pieces” [2], both highly recommended.
I learned APL in the mid-'70s in college but never got to use it for much (I do remember writing a table tennis game, which was pretty bizarre). But I always remember how amazing it was to learn the "combos" that did so much work in so few symbols. It was like learning to master a video game like Street Fighter. But APL was always a mostly write-only language: unless you used it every day, your code quickly became WTF-is-that.
APL really is a wonderful core notation, even though the full language is rather crufty. I've long been saddened that its promises of parallelism never seemed to work out.
I've never used K, but isn't Dyalog mostly parallel through isolates? Last I checked, basic array operations were not automatically executed in parallel. Did this change?
That seems like the right approach for CPU-targeted code. Has an APL descendant ever been created to target GPGPU compute kernels, or even to compile to an FPGA netlist description of a DSP?
Aaron Hsu’s co-dfns is a compiler for a subset of APL, itself written in APL, which emits GPU code.
Do an HN Algolia search for his username “arcfide” to find a lot of discussion, and there are a couple of YouTube talks: one of him walking through the codebase on a livestream, another a recording of a conference-style talk introducing it to people.
It needs Dyalog APL, but that’s now easily available for non-commercial use.
Because HN is not Reddit. The value of HN comes from good and insightful comments. Humor is noise, and noise drowns the signal. EDIT: humor is a problem in itself, even when it accompanies useful comments.