To me, the most powerful example is this: "It forced me to think of every such problem as a chain of the primitive list operations: maps, folds, filters, and scans. Now I always think in these terms. I 'see' the transformation of a container as a simple sequence of these operations."
That's a higher level of thinking that is faster and yet more precise than thinking in terms of imperative loops.
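For instance (my own illustration, in Haskell), "sum the squares of the even numbers" becomes a short pipeline of exactly those primitives instead of a loop with an accumulator:

sumSqEvens :: [Int] -> Int
sumSqEvens = sum . map (^ 2) . filter even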
Haskell has taught him to think more powerfully.
Even more so than Haskell.
It doesn't have monads or the kinds of abstractions that let you modify control-flow semantics. It doesn't even have the facilities to build abstract data types - which makes you work with less abstract data, and realize that although some abstraction is useful, most of what is practiced today is useless.
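(For contrast, a sketch of mine, not from the comment above: in Haskell, the Maybe monad is exactly such a control-flow abstraction - each step runs only if the previous one succeeded, with no explicit checks.)

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- a Nothing at any step aborts the rest of the chain
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  r <- safeDiv a b
  safeDiv r c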
APL / J / K promote container abstraction and, at the same time, concrete, down-to-the-metal real work.
Maybe more inspiration than introduction, but the "Game of Life in APL" video is a must-see (http://uk.youtube.com/watch?v=a9xAKttWgP4&fmt=18)
The description on YouTube points to a "Game of Life" tutorial at http://tryapl.org. I haven't tried it, but it looks nice.
For a more academic (as in "read about, don't play with") introduction, I found Iverson's "Notation as a Tool of Thought" (http://awards.acm.org/images/awards/140/articles/9147499.pdf) a good read.
At least, I believe that is true. Someone stronger in CS theory could come along and verify/correct that statement.
This paper, for example, appears to argue that FOLD (foldr in particular) is universal, meaning that any function that moves through a list and resolves it to a single value can be expressed as a fold - in that sense there is only one way to do it. In other words, you may have some other algorithm that you think is not a FOLD, but if it moves through a list and resolves it to one value, then your algorithm is a FOLD. (I say the paper "appears" to claim that because, in all honesty, there's about 10-15% of the paper that I didn't fully grok.)
I suspect that applies to map/filter/etc. After all, if you look at their definitions, they contain nothing but the essence of the operation (map/filter/fold/etc) itself.
So, map/filter/fold/etc are THE operations on lists.
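As a quick sanity check (my own sketch; the paper works through the standard versions of these), map and filter do fall out of foldr directly:

map' :: (a -> b) -> [a] -> [b]
map' f = foldr (\x acc -> f x : acc) []

filter' :: (a -> Bool) -> [a] -> [a]
filter' p = foldr (\x acc -> if p x then x : acc else acc) []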
You mean these functions - http://karma-engineering.com/lab/wiki/Tutorial5
Why Haskell, then?)
We can instead give trees a proper type, give the operations over that type proper signatures that help explain what the operations do, and at the same time get verification from the compiler that we aren't mixing things up.
It's hard to beat
data Tree a = Node a [Tree a]
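For example (a sketch of mine, using the Tree above), the signatures alone already tell you a lot about what each operation does:

-- number of nodes: this node plus the sizes of the subtrees
size :: Tree a -> Int
size (Node _ ts) = 1 + sum (map size ts)

-- collect the labels into a list, root first
flatten :: Tree a -> [a]
flatten (Node x ts) = x : concatMap flatten ts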
separation of effects from pure functions - this I cannot understand. Aren't these functions pure?
And another part of "properness" - each node of a tree is a tree itself. Sometimes fewer types is more.)
Well, I'm old-fashioned, but I think that being able to put any kind of data into a list is a strength, not a weakness, of a language, and that user-defined ADTs and compound data structures need no explicit typing - they are nothing but conventions.
Types provide one important thing: static guarantees. Compile-time type checking can only work with strong, expressive data types, and ADTs provide a very good way of enforcing that. (Together with Haskell's `newtype` keyword - which really is nothing more than a convention, if you will, a different way of expressing a type synonym, but with the static guarantee that when you declare `newtype Name = Name Text`, you cannot use a `Name` wherever you can use a `Text`, while you can still easily get at the `Text` that `Name` wraps.)
This makes it much easier to express your intent directly in the code, namely in the data types. Just compare:
makePerson :: Name -> Age -> Gender -> Address -> Person
makePerson' :: Text -> Int -> Text -> Text -> Person
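A minimal sketch of the difference (the bodies and the two-field Person are my own illustration; only the signatures above come from the comment):

{-# LANGUAGE OverloadedStrings #-}
import Data.Text (Text)

newtype Name = Name Text
newtype Age  = Age Int

data Person = Person Name Age  -- trimmed to two fields for brevity

makePerson :: Name -> Age -> Person
makePerson = Person

ok :: Person
ok = makePerson (Name "Ada") (Age 36)

-- makePerson (Age 36) (Name "Ada") is rejected at compile time,
-- whereas with the Text/Int version the compiler could never catch
-- a swap of two Text fields (say, gender and address).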