
If (and I grant, this is a big if) you are used to them, the symbolic nature of APLs allows you to discuss code fragments (and even entire* algorithms) using inline elements instead of as separate interspersed blocks.

The difference between scanning Algol-style and APL-style code is a little like the difference between scanning history books and maths books: on one hand, one must scan the latter much more slowly (symbol by symbol, not phrase by phrase), but on the other hand, there's much less to scan than in the former.

* Edit: as an example, compare the typical sub-10-character expression of Kadane's algorithm in an array language with the sub-10-line implementations one typically finds online.
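
(If you haven't seen it written out, the loop version you find online is roughly the Python sketch below; wording and names are mine. The array-language version packs the same "best sum ending here, then take the overall best" recurrence into a short scan followed by a maximum reduction.)

    # Kadane's algorithm: maximum sum over all non-empty contiguous subarrays.
    def max_subarray_sum(xs):
        best_ending_here = best_so_far = xs[0]
        for x in xs[1:]:
            # Either extend the running subarray or start fresh at x.
            best_ending_here = max(x, best_ending_here + x)
            best_so_far = max(best_so_far, best_ending_here)
        return best_so_far

    print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6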




APL people often point to the definition of "average" as evidence for its economy of expression: +/÷≢

They make the argument "the word 'average' has more symbols than its definition, so why not just use the definition inline as a tacit function?"
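
(For anyone who hasn't seen tacit APL: a three-function train (f g h) applied to x evaluates as (f x) g (h x), so +/÷≢ reads as "the sum divided by the tally". A rough Python gloss, names mine:)

    # Gloss of the train +/÷≢ : (+/ x) ÷ (≢ x), i.e. sum divided by count.
    def average(xs):
        return sum(xs) / len(xs)

    print(average([1, 2, 3, 4]))  # 2.5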

There's some elegant beauty in this that I'm sympathetic to. However, I think it's fundamentally flawed for one big reason, which in my opinion is the core of the unreadability of APL (and other array languages that rely on custom symbology):

Humans think in words, not letters. There is never semantic content in individual letters; a word is not defined by its letters. Sometimes a word's definition can be deduced from its syllables ("un-happi-ness"), but the number of prefixes/suffixes is minuscule compared with the number of root words.

We naturally chunk concepts and create referential identifiers to the abstractions. The point of an alphabet is to make the identifiers generally pronounceable and distinguishable.

+/÷≢ is not pronounceable as a single word, even among APL experts ("add reduce divide tally" is not a word). We have a word for this concept in English, "average" (and synonyms "avg" and "mean"), in common usage among programmers and laymen alike. Using +/÷≢ to mean "average" would be like defining a function AddReduceDivideTally(x), which in any reasonable codebase would be an obviously bad function name.

The semantics of array languages, like stack languages, already lend themselves to extraordinary expressivity and terseness, even without a compressed symbology. What is wrong with this?

    sum := add reduce
    average := sum divide tally
I mean, that is it, the essence of "average"! Anyone who knows both English and Computer Science can look at that and understand it more or less immediately. Compressing this into an esoteric alphabet does nothing to further understanding; it only creates a taller Tower of Babel separating us from our goal of developing and sharing elegant definitions of computational concepts.
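
(For anyone who wants to see those word-based definitions run, here is one possible Python rendering; functools.reduce stands in for "reduce", and the function names are of course mine.)

    from functools import reduce
    from operator import add

    def sum_(xs):
        return reduce(add, xs)        # "sum := add reduce"

    def average(xs):
        return sum_(xs) / len(xs)     # "average := sum divide tally"

    print(average([1, 2, 3, 4]))      # 2.5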


But using +/÷≢ to mean average isn't like using AddReduceDivideTally, it's like +/÷≢. With substantial array programming experience I do read this as a single word and pronounce it "average"; why should the fact that each symbol has its own meaning prevent me from processing them as a group? It's purely a benefit in that I can (more slowly) recognize something like +/÷⊢/⍤≢ as a variant of average, in this case to average each row of an array.
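
(Glossing what that variant computes rather than parsing it symbol by symbol, in Python terms it is roughly the sketch below; the names are mine.)

    # Row-wise average: one mean per row of a two-dimensional array.
    def average_rows(matrix):
        return [sum(row) / len(row) for row in matrix]

    print(average_rows([[1, 2, 3], [4, 5, 6]]))  # [2.0, 5.0]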

My opinion on the overall question, which I've written about at [0], is that it's very widely acknowledged that both symbols and words are useful in programming. This is why languages from PHP to Coq to PL/I all have built-in symbols for arithmetic, and usually a few other things, and built-in and user-defined words. The APL family adds symbols for array operations, and often function manipulation. Perhaps not for everyone, but, well, it's kind of weird to see a proof that a language I use to understand algorithms more deeply couldn't possibly do this!

[0] https://mlochbaum.github.io/BQN/commentary/primitive.html


> Humans think in words, not letters

In some languages, e.g. Chinese, each symbol means more than a single letter would in a Latin alphabet. APL-family languages, to me, are just a bit more like that.


Each character (hanzi) in Chinese is a full word. The "letters" are the radicals, which do not contribute to the meaning.


Sure, but...that's my point. In APL those characters aren't letters either, they're words.

I agree with your overall point (which is, I think, DRY) but not this specific criticism.


> Humans think in words, not letters.

That's true. Just a nitpick: it's not about letters; it's about the units that make up a word, and many words in natural languages are built from smaller parts (morphemes) that carry meaning.

But people do tend to forget them and just use the resulting word as a unit.

Can't think of a good example in English, but in Italian the word for "alarm" is "allarme", which comes from "alle armi!" ("to the weapons!"). At some point people used that word to mean you should grab your weapons, and later they just meant it metaphorically. Most people today don't even make the connection between the two words, despite it being right in their face.

What would be an equivalent example in English?


> +/÷≢ is not pronounceable as a single word, even among APL experts ("add reduce divide tally" is not a word)

I’m not entirely certain most programs will ever be pronounced.

> What is wrong with this? average := add reduce divide tally

Nothing, but it seems to imply that this: average := +/÷≢

… nicely resolves this issue of wanting to use words (AKA sequences of pronounceable but semantically meaningless glyphs) as human-friendly (at least for English-fluent humans) mnemonics for programs (AKA sequences of not-muscle-memory-pronounceable but semantically meaningful glyphs), yeah?

Mind you, I haven't looked at the language deeply enough to know whether you can actually define an alias like that.


> If (and I grant, this is a big if) you are used to them, the symbolic nature of APLs allows you to discuss code fragments (and even entire* algorithms) using inline elements instead of as separate interspersed blocks.

So I easily grant you this, though I don't think it's much different in practice than using subroutines.

But even if I grant you that a high level of experience allows rapid scanning, there's still a barrier of translating to and from English (or whatever language you develop in), and that barrier seems higher because of the symbolic nature of the language. This is also true of certain procedural languages, but there have also been an enormous number of think pieces about how best to structure procedural programs so that they translate naturally into natural language. I'd imagine that not being able to leverage those decades of discussion straightforwardly would be a loss.

Granted, this is also a problem with stuff like Haskell, but the type system goes a long way toward ameliorating the concern by classifying structures very rigidly.





