A Personal History of APL (1982) (ed-thelen.org)
122 points by lupin_sansei on Dec 3, 2023 | 47 comments



The easiest way for non-APL programmers to begin to understand APL is to see what it hasn't got: boilerplate and loops. If you discard those from any other programming text, you're left with the essence of what the code does. APL is only explicit about that essence; the rest is inferred by the interpreter. That's why it is so compact, and that's why, if you're used to step-by-step instructions with a ton of air in between, you will initially find it hard to read. But that gets easier over time, and before you know it you'll think of other languages as hopelessly verbose.
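To make the contrast concrete, here is a rough Python sketch (not APL, just an illustration of the idea): the same computation written first as step-by-step instructions with a loop, then as a single loop-free expression closer in spirit to the APL one-liner +/(⍳10)*2:

```python
# Sum of squares of 1..10, spelled out with explicit boilerplate and a loop:
total = 0
for i in range(1, 11):
    total += i * i

# The same essence with the iteration left to the language, roughly the
# shape of the APL expression +/(⍳10)*2 ("sum over squares of 1..10"):
loop_free = sum(i * i for i in range(1, 11))

print(total, loop_free)  # 385 385
```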


The inventor of APL, Ken Iverson, also made the language J, which is very much influenced by APL but does not use APL's special characters. Think of it as ASCII-APL. It is a very nice language to program in and has a decent user community. It is open source, incredibly fast, and at first glance looks as cryptic as APL.

https://www.jsoftware.com


To be clear, Unicode fully supports APL, even if ASCII does not:

https://en.m.wikipedia.org/wiki/APL_syntax_and_symbols#Monad...


I used APL professionally for about ten years. One of the most interesting applications I worked on was the analysis of DNA sequencing data during the Human Genome project.


The language is interesting, but the management ideas in this are very strange.

Example: tin flippers. Items are being checked manually and miscounted. The numbers are inherently inaccurate. And he makes the point that this applies in many other situations - some where the inputs are mechanical, so "Use a machine" isn't a solution. ("Use a better machine" might be. But how would you know without being able to monitor output quality?)

How is a programming language supposed to fix any of this? Finding gross inconsistencies shouldn't be too hard, but automating broken processes algorithmically means you have automated broken processes.

This is not at all the same problem as formally specifying software in ways that make it more reliable.

The other problem is that it sees software as strictly sequential in->stuff happens->out.

This is basically batch mode, not real time. As soon as you need real time I/O everything gets a lot more complicated. I don't see anything in APL that solves that.

And that's even more true of business processes. To add: it's not that analytics can't be useful. But APL won't somehow magically guarantee that your analytics are any better or more useful than output from some other language or system.


Real time processes are indeed critical in some contexts. And, arguably, we should be putting more effort into some of those contexts.

For the APL family, that's often a call for code which works with external interfaces. In many potential APL environments a handful of well designed custom primitives would get you where you need to go.

But no programming language is adequate for all roles, and that includes APL. (It can still be useful for modeling / prototyping for real time work, but that's not always necessary.)


Everything involving a user is a real-time thing. But that's not how the software industry views it; somewhere along the line the user became a peripheral rather than the other way around.


"Every system is fault-tolerant; unfortunately in most of them the component tolerating the faults is the end user."


It's at least reassuring to learn from the 1952-1954 section that the phenomenon of doing things without having any clue about them easily predates the microprocessor.

(btw I second the recommendation of the 1962 A Programming Language)

Edit:

> The third page had an illustration that, in a few short lines, described George Dantzig's simplex algorithm simply and precisely.

> That was the overwhelming, crucial experience.

This is the crucial feature of the APL family for me: they permit algorithm discussions via inline, not block, content.

eg: https://news.ycombinator.com/item?id=28970345

Edit 2: Upon reflection, it's probably also the Achilles' Heel of the APL family: most people vastly prefer easy and DWIMy over simple and precise.

Edit 3: found the paper containing the simplex algorithm: https://www.jsoftware.com/papers/DFSP.htm Note that 1960 was before "structured programming" so we now have (with the benefit of hindsight) much better ways to express the control flow of this calculation.


> most people vastly prefer easy and DWIM over simple and precise

Most people also vastly prefer not having to work on difficult problems. Iverson was clearly trying to provide a notation that would help the (admittedly smaller) group of people who find solving difficult problems interesting and valuable.


In the beginning of the computer age, all problems were difficult: if not because of the problem itself, then because of the limitations of the machines. It's amazing they did as much as they did with so little; your average Arduino makes the best hardware from those days look puny.


That is true, but OTOH I maintain that - in absolute terms - there are more difficult problems that people are struggling to solve using computers today than there were in 1970.


If you're the M Kromberg, any chance you could ask J Scholes to whip up a dfns version of Iverson's 1960 Simplex algo for us?


John was working on implementing negative right arguments to ⎕DL (the delay function in all APL implementations). As soon as that is sorted, we expect him to reappear.

John once did a live demo where a negative number (like ¯17.01) suddenly appeared in his APL session log. He looked puzzled for a moment, continued with his demo script for a little while, and arrived at the statement ⎕DL ¯17 (delay negative 17 seconds, and return the actual amount of time waited). He exclaimed "Oh, THAT's what that was!" and continued with his demo.

John had a fantastic sense of humour; if you have not seen them, I recommend taking a little time off to watch some of the recordings at https://johnscholes.rip/video/


Sadly, John Scholes passed away in 2019. https://aplwiki.com/wiki/John_Scholes


Sorry to hear that, but very glad to have run across http://dfns.dyalog.com/downloads/howcomp.pdf as a result...


I tried J (Iverson's successor to APL) in earnest but struggled to understand what I was supposed to do when I needed data structures other than arrays. E.g. tries, trees, hash tables and so on. The majority of the out-of-the-box vocabulary is centered on arrays, so it seemed that either you convert your problem to some kind of an array representation, or go somewhere else.


i've also tried learning J and can't really say i've managed it yet, but i think the idea is that everything is an array. in J an array can contain a box, and a box might contain an array, so you have nested arrays (different from multidimensional arrays).

lisp baffled me for a while because, in my mind, a list is different from a tree, but in lisp a list element might itself be a list, and so a lisp list might be a tree. similar situation with J arrays.

i'm not sure how APL does it. i /think/ it uses a flat array model, but i'm not sure what that means exactly, or if it's even applicable to modern APL, let alone what it might imply about bending APL arrays into other data structures.
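A loose Python analogy for the boxed-array idea (my own illustration, not J or APL semantics): nesting is what lets a uniform array type double as a tree, just as a Lisp list of lists does:

```python
# A flat array: every element is a scalar.
flat = [1, 2, 3, 4]

# A nested array: an element may itself be an array (a "box" in J terms),
# which is enough to represent a tree.
tree = [1, [2, [3, 4]], 5]

def leaves(node):
    """Recursively unbox a nested list back into its scalar leaves."""
    if not isinstance(node, list):
        return [node]
    out = []
    for child in node:
        out.extend(leaves(child))
    return out

print(leaves(tree))  # [1, 2, 3, 4, 5]
```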


J has primitives (like I. and i. and e. and the primitives which set up for their use) which serve some of those roles.

But, yes, approaches which use some kind of array representation are highly favored by the language (and by other languages, to varying degrees, once you understand the patterns and issues).
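To sketch what that looks like (a loose Python analogy written for illustration, not J's actual behaviour): dyadic i. returns the index of each item in a lookup array, with the array's length signalling "not found", and e. tests membership; together they cover much of what a hash table is usually reached for:

```python
keys = ["apple", "pear", "plum"]
vals = [3, 1, 4]
queries = ["pear", "fig"]

# Analogue of J's dyadic e. (member of): which queries appear in keys?
member = [q in keys for q in queries]                      # [True, False]

# Analogue of J's dyadic i. (index of): position of each query in keys,
# with len(keys) standing in for "not found", as J does.
index = [keys.index(q) if q in keys else len(keys) for q in queries]

# Indexing vals by those positions is a dictionary lookup in array clothing.
found = [vals[i] if i < len(vals) else None for i in index]  # [1, None]
```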


We had APL on our Prime minicomputer at Swarthmore when I started there in 1978.


My missive a few years ago about APL at Swarthmore in the mid-1970s:

https://news.ycombinator.com/item?id=27460887

> In the mid-seventies at Swarthmore College, we were mired in punched card Fortran programming on a single IBM 1130. ... Late one Saturday night, I made a misguided visit to the computer center while high, smelled sweat and fear, and spun to leave. Too late, a woman's voice: "Dave! I told Professor Pryor he needed you!" ... So busted! Then I heard this voice “See these square brackets? See where you initialize this index?” He was spectacularly grateful.

> One cannot overstate the rend in the universe that an APL terminal presented, catapulting me decades into the future.


Did you do anything with it?


Can APL be used to implement Machine Learning algorithms? It seems like a good fit.


Yes, see e.g. https://apl.wiki/nn


Yes, but no one's made a reverse-mode autodiff system for it yet, so all of the linked examples have hand-written derivatives.
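To spell out what "hand-written derivatives" means here (a minimal Python sketch of the general idea, not code from the linked examples): without reverse-mode autodiff, every forward function must be paired with a manually derived backward one:

```python
import math

def forward(w, x):
    # A single sigmoid unit: y = 1 / (1 + exp(-w*x)).
    return 1.0 / (1.0 + math.exp(-w * x))

def grad_w(w, x):
    # Hand-written derivative dy/dw, using
    # d sigmoid(z)/dz = sigmoid(z) * (1 - sigmoid(z)) and dz/dw = x.
    y = forward(w, x)
    return y * (1.0 - y) * x

# A reverse-mode autodiff system would generate grad_w from forward
# automatically; here it has to be derived and maintained by hand.
```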



J influenced the creation of Pandas by Wes McKinney.

Ah, I sometimes have the wild thought that if he had just stuck with J or APL, Python might not be where it is today in ML. Python has become the Borg.


Someone in the HN comments called out that J is the proof that Iverson's "Notation as a Tool of Thought" is a failed idea. Even Iverson himself, the inventor of "Iverson Notation" (proto-APL) as a better math notation, the inventor of APL, who used it to design the IBM 360's processor and then turned it into a programming language, abandoned it for ugly ASCII scribble (J) because that was more convenient.

I guess the ideas, not the notation, turned out to be the important part after all.

"I have only made this letter longer because I have not had the time to make it shorter."

It's Advent of Code[1] (AoC) season again; take a moment to look at the answers people put in the big Reddit answer threads, e.g. [2]. The comments are all a beautiful/awful zoo of languages, wildly varying in programmer experience level, familiarity with the language, choice of approach and algorithm, runtime, and focus on a tidy solution versus a quick answer.

I wish we could see how much time people put into their solutions. I suspect the famous quote above applies, and it takes longer whether you polish a plain language or write directly in a terse language. The hurried or inexperienced answers tend to be long and garbled; the experienced and polished answers tend to be clean and clear. The racing leaderboard entries that I've seen (people seeing the problem and getting an answer ASAP) tend to be Python, and tend to be short and clear. They're almost never APL, J, K, Q, R, uiua, or Haskell. Does that say anything of value about the ability to quickly and clearly express ideas in a language?

I feel like there's enough of these answers now after years of AoC for someone to analyse and compare the languages. My gut feeling is that non-golfed Python still comes out the most easily writable and easily readable, the nicest balance between density and verbosity.

[1] https://adventofcode.com/ - daily puzzles through December, solved with code using whatever language you like, however you like, the site only checks your answer not your working.

[2] https://old.reddit.com/r/adventofcode/comments/1883ibu/2023_...


The author of https://blog.vero.site/post/noulith seems to be familiar with both APL and J, and draws some inspiration from them…


Lots of people do Python, so it makes sense that a lot of leaderboard solutions are in Python! Hardly rigorous (n=1), but I got on the leaderboard for day 2 using K. Other people have before in array languages too.


> "Lots of people do Python so it makes sense that a lot of leaderboard solutions are in python!"

Does it? If you saw a bike racer riding an old steel commuter bike with mudguards and pannier rack, because lots of people ride those, wouldn't that be weird? Bike racers ride lightweight carbon fibre bikes and they do so because other bike racers ride them and they could not be competitive with something much heavier and less aerodynamic. They use every advantage they can get.

If there was a programming language which was unusually good for rapid data crunching, over several years of AoC wouldn't you expect it to become the dominant language on the leaderboard?


AoC is pretty low stakes, so it doesn't really make sense to put too much effort into it. The average person can get a lot faster by automating all the input downloading/submitting/etc. and writing a huge number of utilities in their language of choice than by learning an array language.

Each year probably only a single digit number of people using an array language actively try for leaderboard spots, so if anything it's impressive any of them manage compared to the 100x or 1000x number of python/etc programmers also competing.


I find it sad that APL nowadays means Dyalog. They appear to do amazing work. Unfortunately, the license isn't open. Why would I invest time learning it only to be locked in? I wonder how big a role licensing plays in APL not being more popular and widely used in the present day. There are open variants, like GNU APL and J. However, GNU APL has none of the advancements made in the past 40 years, and J is J, not APL. The notation is simply different.


Anyone is of course free to fork GNU APL, or any other APLish FOSS language, and modify it to include Dyalog APL's features. However, Dyalog can only do so much amazing work because users that make significant amounts of money with Dyalog APL pay enough to support 25 full-time employees. Note that as long as you make less than £5000 per year from its usage, you can use Dyalog APL without limitations.


There's also April APL: https://github.com/phantomics/april

Also the array language family seems to be stronger than ever with foss: ngn/k, BQN, uiua, and of course J but as you mentioned they're all different languages.


I can't recommend this April video by Andrew Sengul enough:

https://www.youtube.com/watch?v=AUEIgfj9koc


Thank you. I had seen April previously although it looks like it's had a lot of updates since then. I appreciate you bringing it back to my attention :)

It looks like it can load pure APL files.

All the examples I see use the Lisp repl and APL is called within strings. Does it provide an APL repl, too?


April is amazing. I have used J for over a decade, and I am currently being swept away by uiua, but my old Lispy love teamed up with APL in April is a knockout combo: Do the generic stuff with Lisp and the numerical magic with APL.


I had not heard of uiua before, but I'm familiar with both APL and stack languages, so it should be fun to explore its capabilities.


No mention of 'R'?


genuinely curious -- do you not feel the burden of being locked in with cpython, gcc/clang, rustc, ...?

or do you maintain forks of those?


I don't feel locked in by the license of those projects. They have permissive licenses. Anyone can maintain them.

I don't need to personally maintain a fork in order for the possibility to maintain one to exist.


what is your realistic estimate for the capability of any non-corporate group to pick up maintenance of one of the open-source projects?

proprietary programming systems typically come with escrow agreements, providing access to the source code to customer e.g. if the vendor stops supporting the product. that wouldn't be materially different from another large corporate picking up e.g. llvm/clang if apple/google stop being interested.


At what time, for which project, over which period? If we're talking circa 1982 for APL between then and now, the number would be greater. From last week to this moment, less. Yet, small odds are always better than no odds, in terms of a thing happening. It really only takes one to get started.

I appreciate you mentioning escrow agreements. That's a valid point. However, "typically" is not "guaranteed". It looks like MicroAPL, a company so involved with APL that they put it in their company name, no longer sells or develops their APLX product (even though the company appears to still exist). They stopped selling and developing it in 2011. Was it guaranteed that Dyalog would take the project over and host the binaries and documentation in 2016? I don't think so. Good on Dyalog for doing it. The fact that nothing was guaranteed is what makes their actions commendable.

Yet, as far as I can tell, it's only the binaries that are hosted. It appears to receive no further development, no support for new architectures, etc. The thing which is materially different is that the source code is not available, there is no guarantee that it ever will be, and no one, outside of maybe one or two people, has legal authority to use the source. Your point about escrow stands and is valid. It's also fundamentally different from a project having an open license.

PS: sorry, I didn't directly answer your question. You asked for an estimate of a non-company continuing development of one of the current freely licensed projects you listed. For something like LLVM, I think pretty high. A lot of people are involved in it, a lot of people prefer it to GCC, and it has been worked on by many people for many years. But who knows. Maybe the companies pull their support and it dies. Maybe not.

That's also a separate question: what's the chance of an open project getting continued support. My lament is that many APLs aren't open to begin with and are not guaranteed the chance to get support.


what do you mean by "locked in"? They all support external libraries for extra functionality, and these extension mechanisms are very well thought out. The syntax can't be easily extended, which I believe could be a good thing, because it encourages using common idioms instead of ad-hoc inventions understandable only by their author. Not to mention that some form of syntax extension is present in all of your examples.


> They all support external libraries for extra functionality

this is, of course, also true for dyalog apl, just the same as any practical programming system, be it open-source or not.


GPU programming looks like the modern incarnation of APL.



