Hacker News

Agree on the families, but I would pick a different representative for many of the categories.

Algol -> C. Mostly because you can actually do things with C, and yet it remains a fairly small language that's a relatively pure exemplar of the Algol tradition.

Lisp -> Scheme. Also because it's a tiny language that tries to push the fundamentals of the Lisp family (code-as-data, recursion, functional programming, macros) as far as possible.
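To make code-as-data concrete: a Scheme expression like (+ 1 2) is itself a list the program can build and evaluate. A toy Python analogy, using nested tuples for expressions (the names `evaluate` and `OPS` are mine, and this is an illustration, not Scheme semantics):

```python
import operator

# Scheme-style expressions as nested tuples: ("+", 1, 2) ~ (+ 1 2)
OPS = {"+": operator.add, "*": operator.mul}

def evaluate(expr):
    # Numbers evaluate to themselves; tuples are (operator, arg, arg).
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

# Because code is data, a program can be constructed at runtime:
program = ("*", ("+", 1, 2), 4)   # like (* (+ 1 2) 4) in Scheme
print(evaluate(program))          # 12
```

A real Lisp takes this much further — macros rewrite these structures before evaluation — but the data representation is the starting point.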

ML -> Haskell. ML is eager, Haskell is lazy. If you're going to learn about the ML family, you might as well learn the concept of laziness, which results in a very different style of programming than the functional programming of the Scheme variety (in the Lisp family).
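For a taste of how laziness changes style: in Haskell you can define an infinite structure and consume only a finite prefix of it. A rough Python approximation using generators (variable names are mine; Haskell does this pervasively and implicitly):

```python
from itertools import count, islice

# An "infinite list" of squares; nothing is computed until demanded.
squares = (n * n for n in count(1))

# Lazily take the first five, like `take 5 (map (^2) [1..])` in Haskell.
first_five = list(islice(squares, 5))
print(first_five)  # [1, 4, 9, 16, 25]
```

In an eager ML you would structure this differently, because evaluating the whole sequence up front is impossible.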

APL -> J. The usage of special symbols is largely irrelevant to the concepts in APL, and it's a barrier to accessibility. You can learn all the important parts of array-oriented programming with J and use actual words to do it.
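To make "array-oriented" concrete: the style operates on whole collections at once rather than element-by-element with index bookkeeping. A plain-Python approximation (comprehensions stand in for what J and APL express as short trains of primitives):

```python
# Whole-collection operations in the spirit of APL/J, approximated
# in Python; real array languages have these as single primitives.
temps = [12.0, 15.5, 11.0, 18.5, 16.0]

fahrenheit = [t * 9 / 5 + 32 for t in temps]   # elementwise arithmetic
mean = sum(temps) / len(temps)                 # reduction
above_avg = [t for t in temps if t > mean]     # boolean selection

print(above_avg)  # [15.5, 18.5, 16.0]
```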

Self, Forth, and Prolog I would keep as exemplars of their type. I would also add Tcl as another ur-language for string-based scripting languages (with Perl, PHP, and SNOBOL as other representatives of the category).




I'm not sure if it's common knowledge that "ur-" means "original".


In academic literature, "ur-" doesn't just mean original, as much as it means "essential essence of" or "fundamental". In contexts where you're talking about the first/original thing, or something that provided characteristics that would eventually become a trend, "proto-" would be more appropriate.

For a famous example, Umberto Eco's essay "Ur-Fascism" isn't describing what the first or original fascist movement was, but instead exploring what makes Fascism Fascism. This usage seems to be pretty consistent across academia, which is responsible for the vast majority of cases of slapping foreign prefixes onto English words.


I hadn’t come across this before, but in Dutch we have the prefix oer-, pronounced like the "oo" in "poor", so I somehow connected the dots.


Yeah, K doesn't even have proper n-dimensional arrays. Funny enough, the author writes "If you do a lot of numerical work, learn J earlier." which may be a typo, but still…

> You can learn all the important parts of array-oriented programming with J and use actual words to do it.

J doesn't use actual words. It uses symbols just like APL. In fact, APL uses proper words or abbreviations for utility functions that are not part of the core language, whereas J just uses numeric codes combined with more glyphs. And because J is ASCII-only (with bi- and even tri-glyphs) while APL uses pleasant, mnemonic single Unicode glyphs, with APL you never have to work out which adjacent ASCII symbols form a "word".


You can use words in J and APL by defining them.

avg =: +/%#

&.> is predefined as 'each'.
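The avg definition above, `avg =: +/%#`, is a J "fork": a train of three verbs f g h that applies as (f y) g (h y) — here, sum divided-by count. A hypothetical Python sketch of that combinator (`fork` and `avg` are my names, and this ignores the dyadic case and other train forms):

```python
import operator

# A J-style monadic "fork": (f g h) y  means  g(f(y), h(y)).
def fork(f, g, h):
    return lambda y: g(f(y), h(y))

# like  avg =: +/ % #   (sum divided by count)
avg = fork(sum, operator.truediv, len)
print(avg([1, 2, 3, 4]))  # 2.5
```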

Same for APL, but the idea is to learn what the symbols mean, so you can keep it concise and think in composing functions and not reading pages of text. Sort of like mathematicians and mathematical symbols.

I program in J and APL, but I have taken to BQN lately. It is the best of both with some additions too.


The more important distinction IMO between ML and Haskell is that Haskell is truly pure.



