
I did quite a bit of APL programming when I was younger.

When describing APL, people talk about the strange symbols, the mathematics, etc., but I have never seen anyone describe something I only realized after some time: once you are familiar with it, APL makes you approach problems quite differently.

I stopped thinking in 'steps' applied to the individual data points. Instead I solved the problem in my head (writing the line along the way) by aggregating the data points into larger data objects, and then letting those data objects expand in a many-dimensional universe, always larger and larger... and then I simply looked at the resulting mega-thing from a different angle, and started crushing it back along different dimensions until I finally got my answer (and my line was complete). The resulting one-liner was very hard to read... but gave me the correct result.

Inflation, Change of view-point, Big Crush. That is the core of APL.

Yeah I know... sounds crazy. But that was how APL programming felt to me, and I bet I am not alone. No other language I worked with ever triggered in me that kind of mental problem-solving process.




So what I'd like to know is: how does that "quite different" approach to problems differ from the standard mathematician's approach?

I've been playing with J lately. I've also been a longtime numpy user, going back to the days when it was still numarray. Maybe I'm just writing numpy in J, but I find that my approach in both languages is more or less identical: set up a vector, do some matrix operations, maybe some statistical aggregates, write down the answer.
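
To make that concrete, here is a rough numpy sketch of the workflow I mean (the data and the particular operations are just placeholders):

    import numpy as np

    # set up a matrix of data
    samples = np.random.default_rng(0).normal(size=(1000, 3))
    # whole-array operations, no explicit loops
    centered = samples - samples.mean(axis=0)
    cov = centered.T @ centered / (len(samples) - 1)
    # a statistical aggregate, then write down the answer
    answer = np.linalg.eigvalsh(cov).max()
    print(answer)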

Can you provide an example for which the APL approach is significantly different from what one would normally do? It might help me understand what insights I'm supposed to gain.


It's quite similar to the standard mathematician's approach. It's extremely different from the standard (imperative-trained) programmer's approach.

(Note your use of "write down the answer": this is a giveaway that you understand it so well that you're not even aware of your understanding, and might therefore find it hard to explain)


Loosely speaking, this is how I program in SQL. Each table is a plane floating in a multidimensional space, each relation gets pinned from one plane to another, joins are spiky balls, subqueries are recursive non-Euclidean spaces, etc. Although I'd say SQL is more readable than J, the mental visualization you describe is nearly identical.


I agree. "Grokking SQL" (or rather the relational and set concepts behind it) makes it possible to mentally map how you want the query to behave.

Although SQL is way, way more verbose than J/APL, it is still extremely readable, even if the query is massive. Untrained SQL users always point to big queries as some sort of code smell, when in fact most queries are logically partitioned by virtue of how they work.


I replaced a 5k-line system with a 140-line SQL query. So a massive SQL query can be a pretty significant multiplier in reducing the main code.


Guy Steele actually mentioned this in some talk or other. The money quote (loosely from memory) was "APL makes you a better Common Lisp programmer. I was doing a matrix-tensor multiplication routine, and I was thinking about the nested loops, and then I realized it was mapcar of mapcar of apply, done." Sadly, I can't find the video.
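
Loosely, the same shift shown in plain Python rather than Lisp (just an illustration of the idea, not Steele's actual code):

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]

    # imperative version: three nested loops
    C = [[0] * len(B[0]) for _ in range(len(A))]
    for i in range(len(A)):
        for j in range(len(B[0])):
            for k in range(len(B)):
                C[i][j] += A[i][k] * B[k][j]

    # "mapcar of mapcar of apply": map over rows, map over columns,
    # and reduce each row/column pair with a sum of products
    cols = list(zip(*B))
    C2 = [[sum(a * b for a, b in zip(row, col)) for col in cols] for row in A]

    assert C == C2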



