Ruins of forgotten empires: APL languages (scottlocklin.wordpress.com)
135 points by erehweb on July 28, 2013 | 172 comments



The most common "perform operation on bulk data simultaneously" language in wide use today is SQL.

Joe Celko coined a phrase for this: think in sets. Not in loops, not in linear instructions -- but in sets, being transformed into other sets. It's a very powerful way of thinking. Most of the time, you don't need ordered evaluation and so loops are imposing unnecessary guarantees on you.
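
As a toy illustration in Python rather than SQL (the order data here is invented), the same transformation written first as a loop and then as a single set-to-set expression:

    orders = [("ann", 30.0), ("bob", 12.5), ("ann", 99.0), ("cat", 7.0)]

    # Loop thinking: walk the rows one at a time, in order, accumulating state.
    big_spenders_loop = set()
    for name, amount in orders:
        if amount > 20:
            big_spenders_loop.add(name)

    # Set thinking: describe the whole transformation at once; no order is assumed.
    big_spenders = {name for name, amount in orders if amount > 20}

    assert big_spenders == big_spenders_loop == {"ann", "cat"}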

On the more general problem of our collective historical ignorance ... yes. That is us.

Well, not entirely. The fact that decades of the relevant literature are locked up in Fort ACM and IEEE Castle has a lot to do with it.


Shortly after Aaron Schwartz's death, there was a lot of resentment toward the more traditional information distributors, such as Elsevier and other big publishing companies.

But you make a great point about ACM and IEEE. As a student (and now an engineer in industry) in the CS and applied math fields, I can't count how many times during research I've run into a useful article locked behind an IEEE/ACM paywall.

Are you aware of any efforts/campaigns to demand more open access from these two behemoths, who pretend to be on our side?


Maybe I am old school and blunt but IMHO:

>As a student (and now an engineer in industry) in the CS and applied math fields, I can't count how many times during research I've run into a useful article locked behind an IEEE/ACM paywall.

I have never heard of a college that doesn't pay for this access for their students...

>Are you aware of any efforts/campaigns to demand more open access from these two behemoths, who pretend to be on our side?

If you are a computer science related professional you should be an ACM or IEEE member and thus have access to their digital libraries. If you want to have access to guild resources, join the guild.


> I have never heard of a college that doesn't pay for this access for their students...

The vast majority of educational institutions are not in the US and do not have anywhere near the funding to pay rent to the ACM/IEEE. Every dollar they spend on something like pay-wall access would be better spent investing in their students.

> If you want to have access to guild resources, join the guild.

Guilds are cartels. That's why we don't have them anymore except in the more neutered form of trade unions.

But also, the content the ACM et al. charge for is not their original contribution. Other people write the papers and give them to the ACM for free. Other people review the papers for the ACM for free. The ACM takes in about $60 million each year but only spends $40 million on its various activities.

At one time something like the ACM was necessary purely for pragmatic reasons. But now we have the Web, and there is no reason someone should be earning a $20 million profit for artificially restricting the flow of knowledge in the discipline. By comparison, the total budget of arXiv is around $400k. The ACM could make the digital library free and not make a dent in funding for any of the actually useful stuff they do.

It's also unconscionable that the ACM attempts to police third parties linking to copies of papers published on authors' homepages. This issue specifically is why I canceled my membership (after writing a letter of complaint that received the most self-aggrandizing response you can imagine).

I'm more charitable to the IEEE as they do a lot of standards work, which does benefit from having a centralized bureaucratic organization.


Please consider:

• The self-taught of all ages who may have never set foot in a university, or have begun informally studying an unrelated discipline on their own after leaving formal education. That they exist and can make use of academic research may sound far-fetched for people in some disciplines, but it's not at all unheard of, at the very least, for CS, statistics, and mathematics.

• Precocious youth

• As jasonwatkinspdx mentioned, students that do not live in the US

• That restricting access to these materials takes a hidden toll on humanity; unless they are made freely available to all, we will never know how others might have taken advantage of them.


I am a member of both the IEEE and ACM.

First, you have to be a member of both, as the literature is divided between them.

Second, library access isn't included in the membership. It's an extra.

Third, the IEEE in particular partition the hell out of their database and you often wind up paying anyhow ("oh sorry, that's not in the IEEE-CS library, now f-you-pay-me").


I presume you mean Aaron Swartz?


The BOOM guys have also shown that thinking in terms of unordered collections translates much better to distributed systems, where guaranteeing order is expensive:

http://boom.cs.berkeley.edu/


The BOOM project came up with some very impressive ideas. Their half-page-long Paxos implementation in Overlog blew my mind. I don't think their new Bloom language is as expressive, however.


I'm going to read about BOOM later, but can you give an executive summary about why that is?

People who subscribe to the Relational Database dogma often claim that sets are superior to vectors, and yet despite asking a lot of them I never received an example -- whereas I have numerous real life examples in which vectors are superior.


The main thing is that order isn't always necessary, but maintaining order isn't free.

So if it isn't necessary, why pay the cost?

Moreover, the order you want will change from case to case. Today I care about the alphabetical order of names, tomorrow about the order of dates, next week I am most interested in the top 100 transactions by value.

Imposing a single order imposes an access pattern which may or may not hold true. With sets I delay that binding until the last possible moment, when I understand it best of all.


> The main thing is that order isn't always necessary, but maintaining order isn't free.

> So if it isn't necessary, why pay the cost?

But if you use the APL/K approach, you don't pay the cost. There is an order to every result, but you can only rely on it if you know it (or force it by sorting).

Specifically, my question is this:

The order of results of "select ... where" without an order-by clause in SQL is never guaranteed. The order of a ksql/q result is sometimes guaranteed, and can be used. Where's the cost of just giving every result row a running index, which some operations can rely on / change, and most ignore?


That the order isn't guaranteed is a feature, not a bug. It frees the implementation to collect records in the most efficient fashion and from having to maintain ordering.

> Where's the cost of just giving every result row a running index, which some operations can rely on / change, and most ignore?

The cost is that you need to keep an index updated. A trite observation, but there it is. Sometimes that cost is worth paying, sometimes it isn't. SQL doesn't forcefully extract that cost, it lets the database programmer or administrator decide, based on local conditions, what is best.


> That the order isn't guaranteed is a feature, not a bug. It frees the implementation to collect records in the most efficient fashion and from having to maintain ordering.

Yes, I understand the theoretical argument. I'm looking for real world examples where that actually helps anything, or theoretical examples where it is superior (in the sense that "there is no way to assign order which would not move the problem to a higher complexity class").

> The cost is that you need to keep an index updated. A trite observation, but there it is. Sometimes that cost is worth paying, sometimes it isn't. SQL doesn't forcefully extract that cost, it lets the database programmer or administrator decide, based on local conditions, what is best.

My point is that ksql does the same -- except that in most cases, the cost is 0, so you get the order "for free"; when the cost is not zero, ksql doesn't make guarantees, and you'll need to ask for ordering, just like in SQL. Every SQL engine already assigns order in one way or another (ROWID, etc.), except it is unpredictable, non-standardized, and doesn't provide most of the benefits even though it is there.

And the "no order" default often does have a cost; e.g., if you are trying to store e.g. tax brackets in your database and find out which tax bracket a salary falls into:

Either you store both the lower and upper bounds of a bracket in a record, which means you can't normalize (and it is hard to express the integrity requirement of "no gaps" as a constraint) - or you do normalize, and then every query is a self join or correlated query that many optimizers won't handle properly. Whereas in ksql, you store it normalized (just start-of-bracket), order by start-of-bracket, and then get the bin that corresponds to your question.
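
To make that concrete, here is a rough sketch of the normalized, sorted-by-start-of-bracket lookup in Python/numpy rather than ksql (the bracket boundaries and rates are invented):

    import numpy as np

    # Normalized brackets: only the start of each bracket is stored, sorted ascending.
    # The matching rate lives at the same position in a parallel array.
    bracket_start = np.array([0, 10_000, 40_000, 85_000, 160_000])
    bracket_rate  = np.array([0.00, 0.12, 0.22, 0.24, 0.32])

    def rate_for(salary):
        # searchsorted gives the insertion point; subtracting 1 yields the last
        # bracket whose start is <= salary, i.e. the bin the salary falls into.
        i = np.searchsorted(bracket_start, salary, side="right") - 1
        return bracket_rate[i]

    print(rate_for(52_000))                        # 0.22
    print(rate_for(np.array([5_000, 200_000])))    # [0.   0.32]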


I'm wondering if we're talking about different things here. You're talking about vectors, which to me suggests an ordered collection of primitives. I'm talking about sets of relations.

The main difference is that a relation is not a primitive. It is a tuple of primitives.

The point being that sometimes you care about those relations being ordered by date, sometimes by amount, sometimes by ID and so on and so forth. The database imposes no order on you and so you can impose one at will.

The database equivalent of maintaining order for every single-column selection is to have per-column indices.

Try that in production on a busy OLTP system and let me know how you go.

Where array/vector languages and SQL see eye-to-eye is in basically abolishing the loop and having a logical model where operations are applied simultaneously to whole collections.

From talking to you, it sounds like the difference is when order is imposed. In cases of time series analysis, where there are single numbers and a natural order, I see the value of array/vector languages (or one of the various proprietary temporal-SQL databases). But most problems of interest aren't vectors with a single natural order.

Incidentally, I don't see why you think the tax-bracket thing is such a killer case. Oh noes, a small join!


> The database equivalent of maintaining order for every single-column selection is to have per-column indices.

I guess that's where the misunderstanding comes. In K/APL, every column has an implicit "natural index" - the first item has index 0, the second item has index 1, etc. The thing that is always guaranteed is that, in a result set, the natural indices of the different columns of the same tuple correspond.

Some operations guarantee a particular natural index order, but many do not. An additional index on the values that allows you to find an exact value (e.g. hash) or range of values (e.g. a b-tree) may or may not exist, just like in a relational database.

This "natural implicit index" does not take memory or anything to maintain, but it is useful.

> Try that in production on a busy OLTP system and let me know how you go.

ksql runs my queries approximately 100 times faster than optimized MySQL (using MyISAM - faster, but no ACID guarantees; ksql is ACID).

> Incidentally, I don't see why you think the tax-bracket thing is such a killer case. Oh noes, a small join!

That was just for known terminology, not knowing where you come from. I have tables with thousands of such brackets. And while I haven't tried Oracle or MS-SQL for them since 2006 or so, it was incredibly easy to confuse their optimizer into doing a complete cross product on those tables and filtering it - which makes it take a second per query instead of milliseconds when the query is optimized properly (or microseconds using K).

This is known as an "as-of" join in time-series queries, and any database without explicit support for it usually takes thousands of times more resources than needed for such a join.
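
For reference, this kind of as-of join can be approximated in Python/pandas with merge_asof; this is only a sketch with invented trade/quote data, not the ksql/q version discussed above:

    import pandas as pd

    # Both frames must be sorted by the join key.
    trades = pd.DataFrame({
        "time": pd.to_datetime(["2013-07-28 09:30:01", "2013-07-28 09:30:05"]),
        "qty":  [100, 250],
    })
    quotes = pd.DataFrame({
        "time": pd.to_datetime(["2013-07-28 09:30:00", "2013-07-28 09:30:03",
                                "2013-07-28 09:30:04"]),
        "px":   [10.0, 10.1, 10.2],
    })

    # For each trade, take the most recent quote at or before the trade time.
    asof = pd.merge_asof(trades, quotes, on="time", direction="backward")
    print(asof)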


The other day I thought I had found a textbook chapter on streaming algorithms/online learning, but Chapter 17 has yet to be written:

http://ciml.info/


Array programming languages predate relational or even most functional languages (sans Lisp). It seems like relational and functional languages were heavily inspired by array languages, if you go back and read some of the old papers.


I think that's a reasonable conclusion to draw. I guess the insight of array languages was applying operations over collections, and the relational model's insight is that you're performing operations on sets (in a mathematical sense).


It definitely is a powerful way of thinking. And yet, implementing transformations used for event sequence/path/time series/complex nested data structure analysis in SQL is like using tweezers for the job. Set thinking is powerful, SQL has limitations.


Yes, SQL's biggest weakness is time and temporality. That's been an ongoing problem since the beginning and it leads to a lot of stress and trouble.

I'm against "complex nested data structures", though. In my experience they are abused and there is always a normalised alternative.


Agreed on the nested data structures. Sometimes you have to work with them though. I finished a project where I had to integrate/analyse data from a factory system originally developed in the 60s (COBOL). Even normalizing 'complicated nested data structures' != fun.


I have had a huge amount of fun lately going through the J labs. Start at jsoftware.com to download the interpreter, then the labs are all built into the interpreter. You can also run it on iOS. The terse code actually means you can type in real programs even on an iPod keyboard.

When I taught people to play frisbee, I used to throw with my off-hand when teaching, because it made me pay attention to the fundamentals I was trying to teach. Learning J had some of that same feel. I felt like a newbie, but when I saw the kinds of audacious programs you could write, I was hooked. Highly recommended for a bit of fun in your spare time.


I went to a small university with only 8 CS professors. One of them was the author of J's array parallelization and he taught our Intro to Functional Languages course.

That class was fourth in the undergrad sequence, coming after C, C++ and Java so it was quite a shock to, well, everyone. He took us from finding the average of a list all the way through to implementing a CMS in J (our department web server still runs on J because he's since retired and no one else knows how to maintain it).

This was easily the hardest class in the entire department. Students who sailed through everything else would struggle to pass. It's a fascinating language but not an easy one to grasp.


Same here, but I found it extremely hard to find real world use cases for the language. It's fun as hell, but for me it's a puzzle, not a programming language.

Anyways, making cellular automata is fun as hell! :P


This happens with proprietary languages. How many people can afford $100k/CPU, however good it is? How many can play around with a language like that at home? Who's going to write infrastructure around such a thing?

Hadoop is pretty crap - I know, I used to use it professionally. You know what else is crap? Unix. People are going on about cgroups as an exciting new feature, when better implementations were around what, thirty years ago in Real OSes.

It doesn't matter. Commodity wins, and sooner or later if there are good ideas to be had we'll find them again.


How is APL proprietary? Anybody can implement it, and there are open-source implementations available. The $100k/CPU price the author referred to was for the high-performance database integrated with a vector-processing language engine.

It is not about commodity. cgroups-like functionality was available in many open-source OSes for years (e.g. FreeBSD jails), but was easily dismissed by the majority as worthless. This is exactly what the author was talking about when referring to Lady Gaga -- the modern programming community is driven by fashion, regardless of how good or bad the currently fashionable technology is. I do not find it surprising, however, as it is a direct result of the order-of-magnitude growth of the programmer population, which is now behaviorally much closer to the general population than it used to be.


Actually the Lady Gaga analogy is a little bit deeper. Lady Gaga is a very good musician, but she intentionally pursues money/fame/attention/whatever instead of originality/art/excellence.


So true ... I don't really appreciate her current music (I mean like Brahms or Mozart), but she was so bluesy in the NYU talent show (http://www.youtube.com/watch?v=NM51qOpwcIM). I could listen to music like that all day.


>Lady Gaga is a very good musician

Compared to what? Beethoven? Charles Mingus? That she can produce some shitty pop tune to be forgotten in a few months after release doesn't indicate much.


She is classically trained and went to a good school. That sets her apart from the overwhelming majority of contemporary pop acts, whose training is mostly from karaoke pub competitions. She's no Beethoven, but she might have been; that's the point.


> contemporary pop acts, whose training is mostly from karaoke pub competitions.

Source?


What open source implementations are those? I spent some time recently learning APL, and the only one I could find was NARS2000 which is Windows-only, very buggy and has practically no documentation available.

If you know of any others, please let me know. I certainly haven't found anything.


There's A+, from Morgan Stanley. It is actually a variant of APL, but it looks mostly like it (unlike J, it retains APL's hieroglyph characters, etc.) Licensed under GPL.


The A+ website is http://www.aplusdev.org/ for those interested.


FWIW, I couldn't get this to build.


It is also in the Fedora repositories (yum install aplus-fsf). I would imagine that it would be in Debian/Ubuntu too, but I don't have a current install to check. If all else fails, you can probably grab the source RPM to see how Fedora/Redhat got it to build.


https://github.com/kevinlawler/kona is an open source implementation of K3.2

(Though note that the most recent version of K, K4, is a much cleaner and simpler language - I'm not aware of an open source version of that)


There's OpenAPL, but I've no experience with it (yet).


s/Unix/Linux/

When you say "Unix" did you mean to say "Linux"?

"Real OSes"

Can you name an example or two? I'm aware of some old non-UNIX OSes that are still around but which over time have become largely ignored. I'm curious the "Real OSes" you were thinking of.

"How many can play around with a language like that at home?"

As many as have the desire and time to do so. Learners are limited to 32-bit and no long-running jobs in excess of 1 hour, but this does not stop anyone from learning the language. There's also an open-source alternative to the interpreter, based on an earlier version of the language.


So what isn't crap?


Stuff made in industries more mature than software. Bridges were probably all crap for the first fifty years of building them too.


I am a bit disappointed by the low ratio of technical content to rambling. And much of the technical content seems to set apart APL from C, but not from many modern languages.

For example, languages like Fortress contain primitives for parallelized loops, and most modern languages also contain enough functional programming support that you rarely have to write for-loops.

Function composition and interactivity aren't a unique feature of APL languages either.

There are some other things that are important for programming today: Maintainability, richness of available libraries/modules, packaging, versioning and so on.

Where would I even look for existing APL libraries? My searches for an HTTP library for APL (for example) have turned up nothing, while searching for an HTTP library for several other languages that came to my mind brought up useful results.


There is an HTTP library in the standard library of J (APL's "spiritual successor"). There is surely one for APL too - it's just less popular, and so the Google results are less helpful.

Unique features of J (I'm writing about J because I know it, while I don't know APL) are that all the basic verbs (functions) can operate on arrays as easily as on atoms. There is no special syntax or feature for adding two matrices, for example; the standard `+` verb does it naturally.
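
A rough Python/numpy analogue of that first point (numpy comes up again at the end of this comment; the arrays here are invented):

    import numpy as np

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[10, 20], [30, 40]])

    # The same + works on scalars and on whole arrays; no loop, no special
    # matrix-addition routine.
    print(1 + 1)     # 2
    print(a + b)     # [[11 22] [33 44]]
    print(a + 100)   # broadcasting: [[101 102] [103 104]]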

The second unique feature is the tacit - point-free - style by default. This means that function composition is the default, not function application. You can define new words by just slapping old ones together, without worrying about threading parameters through them.

The third unique feature (and probably unique to J; the previous ones are found in APL too) is its complex rules of composition (hooks and forks), as well as words that modify functions. You can express very complex things in very little space thanks to this.

J is a very interesting, unique language which is very well suited to one thing - very fast and efficient array transformations - and completely useless for other things. I hear it's used in the financial and banking sectors for rapidly prototyping and sometimes deploying statistical and mathematical algorithms. If you have a good enough use case, then learning J will pay off. For anything less than a full-time commitment, I think that R or Python+Pandas is better. Still, I'm very happy I tried to learn J, as it's completely unlike anything else and it helped me greatly in understanding numpy/pandas for my main job.


He'd rather write in Beethoven languages than Lady Gaga languages. Which is fine, but there's something to be said for "takes 3 minutes to perform with one voice and one computer" as against "takes an hour to perform with a team of 30 highly specialised professionals."

To stretch his metaphor.


The beauty of a programming language comes from being small and consistent. Such as Smalltalk, which has only 6 reserved words and implements message-passing as if it were a function call. This is the brilliant idea of Smalltalk - message-passing + blocks + booleans - and out of this everything else is implemented. Classes are of secondary importance, but delegating (via message passing) to a parent or another object was invented here.

Scheme, on the other hand, was also minimalist and consistent - s-expressions and about a dozen special forms, and out of these everything else could be implemented, including closures, continuations, classes, and message passing, if needed (CS 61A 2008 does this for you).

Haskell is.. well, everyone know what Haskell is.

As for APL, in my opinion, the power of a language comes not from using esoteric math notation but, like in CL or Scheme, from the possibility of easily adding new control structures (call/cc) and special forms (with macros), or even creating new languages, such as Arc.


Everyone certainly knows what Haskell is: a thin layer over a typed lambda calculus. Everybody always overestimates and oversells its complexity.


Not just that. :) It also has a beautiful, clever, minimalist and uniform syntax, thanks to currying, with very reasonable conventions. The type system's syntax is also clever and widely adopted.

The only problem is these narcissistic hipsters, creating so much noise with meaningless blog posts about this or that way of writing abstractions. No surprise that I was confused - reading blogs instead of classic books is a waste of time leading to even more confusion.)


Minimalism is oversold. K specifically eschews minimalism, providing an essential list of operators and combiners (verbs and adverbs). It's no "6-word" language. Each of its characters is a word in other languages.

We don't have six characters in the English language.


Right, and English was designed to perform the function that it does. Perhaps a sane implementation of the language would use far fewer characters.


"designed" is the last word I would use to describe English :)


He was being sarcastic. I think his point is that because English was not designed it is far less efficient than it could be, and far more complicated than it need be. Therefore it is not a good language to imitate when we go to design programming languages.


minimalism in itself is not the key, but being small, consistent and good-enough is.


I had some fun with J. But it's (you know what I'll say) too cryptic..

Let's take an example from RosettaCode. For instance, merge sort:

    merge     =: ,`(({.@] , ($: }.))~` ({.@] , ($: }.)) @.(>&{.))@.(*@*&#)
    split     =: </.~ 0 1$~#
    mergeSort =: merge & $: &>/ @ split ` ] @. (1>:#)
I could probably write it, but could I read what it does in a few weeks?


I find it much easier to read some Q/KDB I've written earlier than the equivalent C++ or C#/Java. For three reasons:

1) It is extremely terse. A small block of code will do what pages of boilerplate class definitions, interfaces, setters/getters, declarations, typedefs etc. will achieve. There is usually no need to navigate through a spaghetti chain of virtual function calls.

2) It is interpreted. I can quickly (without compiling) cut out chunks of the code and run them to understand what they are doing. I can create variables on the fly, fill them with test data, and run the function. If it calls another function, I execute the name of the function and it returns the function code, since functions themselves are first class variables.

3) It actually becomes easy to read because it is easy to parse, for both machine and human. K/Q is read right to left, and so breaks some traditional conventions. For example, 5*6+3 will evaluate to 45, not 33. At first this can be disconcerting, but eventually you will realise that parsing a line of code, no matter how complex it looks, is quite simple - you mentally start at the rightmost token and start gobbling. This is probably why the interpreter is so fast as well.

As someone mentioned below, I'm not sure why you'd consider it "too cryptic". Anything is cryptic when you first encounter it. I suppose the difference is that Q/KDB provides an (extremely valuable) solution to the financial world - the handling of big data long before it was even called that (a tick database with billions or trillions of rows) so there is a valid motivation to learn the language and it doesn't feel cryptic at all once you've immersed yourself in it.


Python wasn't cryptic when I first encountered it. There are many things wrong with python, but using ordinary words for operators ("or", "in") was one thing it got very very right, and I wish there was a more functional/typed language that took the same approach.


APL and K draw their inspiration from math.

Python wasn't cryptic, but it is confusing unless you have a programming background.

the fact that "and" and "or" short circuit.

The fact that "1/2 is 0" (at least before Python 3), and that "1000+1000 is not 2000", even though "1000+1000 == 2000"

Or that "0.3*3 <> 0.9" (if you knew that <> means not equal)


This is a superb comment that has clearly come from experience. Thanks!


You'd be surprised. I'm more familiar with k (kona) than J, but many of those seemingly inscrutable blocks of punctuation become recognizable as a unit pretty quickly. You can, of course, break huge expressions like that apart and name the chunks.

kona: https://github.com/kevinlawler/kona A free implementation of a k3.2-like language.

There are equally strange patterns that you are probably used to. For example, I see stuff like

    PORTB &= ~(1 << 3);  /* clear bit 3 of PORTB */
when doing embedded development. That's C, and unlike in J, in C I have to also keep track of order of operations.


Maybe it takes more than a few weeks. People half-way across the world read text like this everyday: 中國哲學書電子化計

C looks like that to most non-programmers. We forget how much we have already acclimated.


Actually it's a very good example.

Is 中 a word? Or is 中國 a word? Or 中國哲? One can't say until one reads a full sentence, making sense of it and distinguishing words from context.

Similarly, can you clearly distinguish operators here ",`(({.@] , ($: }.))~` ({.@] , ($: }.)) @.(>&{.))@.(*@*&#)"?

Sure, we can acclimate to a lot. Assembler is also a language. But nobody's gonna write Hacker News in it.


Don't mistake looking complex for being complex. These languages have very simple and clean grammars (or they'd fall apart quickly), which is quite a contrast to the dog's breakfast that is modern C++. A simple grammar = simple to read once you've learnt the syntax. There's no issue in having a lot of symbolic operators if they all work in a clean and consistent way.


Unfortunately, J's grammar isn't the cleanest. It uses juxtaposition to mean about a dozen different things depending on whether the tokens are data, first-order functions, or second-order functions and how many are juxtaposed. A variable could stand for any of these. If there are variables in the expression, you have to know which variables stand for which kinds of objects in order to parse the expression.


Why do you think a given language must have only one representation? There are core concepts at the heart of J and K. This tight syntax is one way they can be expressed, but there are others. We are certainly not limited to the simplistic character editors of the past. More than one simultaneous representation of a program can be shown and worked with -- concise, extended, graphical...


> APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums. - Edsger W. Dijkstra


You can find an acidic Dijkstra quote for every language of his time.

Dijkstra, I think, viewed mere languages as an unhappy by-product of computer science and hankered to do it all with mathematics, actual utility be damned.


Alan Kay seems to think along similar (though not identical) lines these days. The sheer inertia of ordinary programming language implementations means that by the time you get to use it, you may realize that something "slightly different" would have been better, but you seldom have any other option than to "suck it up".


Economists have a term for it -- Path Dependence [1]. Some decisions have a high enough reversal cost that they are in practice irreversible. This means that one's future decisions are dependent on historical decisions over which one might have had no control. In our case: once a technology is deeply rooted, it's hard to displace.

More closely, the Rogers model of diffusion[2] has 5 factors of diffusion (relative advantage, compatibility, complexity/simplicity, trialability and observability).

Generally even the most advantageous language transitions score well only in relative advantage. The other diffusion predictors are missing or weak.

[1] http://en.wikipedia.org/wiki/Path_dependence

[2] http://en.wikipedia.org/wiki/Diffusion_of_innovations


Dijkstra never had anything relevant or wise to say about any programming language (including the few he did like).

In general, he was good at the math part of computer science, terrible at the computer part.


The author's response in the comments to an overzealous attempt at political correctness was wonderful.


Not only was that comment (http://scottlocklin.wordpress.com/2013/07/28/ruins-of-forgot...) juvenile, it displays exactly the kind of privilege and ignorance which allows sexism to continue in the world of computer science and software engineering, and which dissuades women (and men who dislike this kind of behavior) from participating in the larger world of compsci/software engineering. In many cases, once you realize that these kinds of assholes rule the roost, it's easier just to go away and do your own thing, rather than disagreeing or speaking up. Especially when we get further support like your comment. "Oh, what a card he is!"

It's okay to disagree--I didn't find the "men use Apply(), not for" thing to be particularly offensive myself, honestly--but there are many ways to disagree without resorting to the kind of hateful, caustic language the author used. This is not how well-adjusted adults disagree when having a discussion.


I very much doubt the response reflects his attitude to women in computer science, and your comment regarding "this is not how well-adjusted adults disagree when having a discussion" is somewhat moot, as it wasn't a discussion. This is where I believe his anger came from, and where my support of his response also stems. The cause of women in computer science (which as a student in a very male dominated CS department, I wholeheartedly support) is not helped by people derailing everything in the manner in which the commentator did. Like many political and ethical discussions, there is a time and a place. If he had said something outrightly and obviously sexist, yes, call him on it. But as it is, I think he had every right to get annoyed and want to nip that derailing in the bud.


as it wasn't a discussion

It never got the chance to be a discussion, as the author responded to the opening volley with a shotgun blast.

That's his prerogative, of course, as it is his blog, and he can take his ball and go home if he wants to.

The cause of women in computer science (which as student of a CS department, I wholeheartedly support) is not helped by people derailing everything in the manner in which the commentator did.

It's not a derail. It is a topic which came up because of language the author used, and which is an ongoing problem in the community of software developers. While you are free to believe in a platonic ideal of "pure ideas and discussion," it doesn't exist in this messy world of relationships and human language and politics. If you are still a student, I suggest you take a gender/sexual/racial politics course and try befriending some people with very much opposite viewpoints to your own. You may be surprised at your shift in thinking.

But leaving that aside for the moment, let's say the author's main intent was to "nip that derailing in the bud." What do you think the most effective, least hurtful way to approach that would have been?

1) Ignore the comment, 2) assuming he is the administrator of the WordPress blog he writes on, delete/hide the comment, or 3) lash out in a juvenile, sexist, hateful fashion.

I'm going to go with 1 or 2 being better than option 3 any day of the week.


How is

> Apply() is the right way for grown-assed men do things.

sexist?

Am I not reading enough into it or something? As far as I can tell somebody was making a fuss about some non-issue and was shot down for it.


If you are someone who is not a man, and you read this, how exactly are you supposed to take it? This article is written by a man and features exclusively discussion and pictures of men or languages invented by men. The whole article paints men == software developers/programmers. The only time it talks about a woman is when it tries to demean Lady Gaga for making money while extolling the virtues of male composers.


>This article is written by a man

So?

>features exclusively discussion and pictures of men or languages invented by men

You've got to be kidding me, right? The majority of programming languages are made by men, so it is statistically likely that any discussion of programming languages will involve those made by men. You might as well label any discussion of programming languages as sexist.

I'm not sure if this response should have a /sarcasm tag on the end, because it's more sexist than the original comment.


You have to take a Herstory class at a Gender Studies Department to understand the blatant privileged sexism dripping from this quote.


It implies that only men are the audience, who would be deciding whether to use Apply() or something else.


Or it implies that in real life men WILL be 95% of the audience, and doesn't artificially alter his language.

Or it implies that whoever reads the text, man or woman, will be wise enough not to give a fuck for such minutiae.


> men WILL be 95% of the audience

It has been postulated, I think plausibly, that the commonality of such language is one reason why this gender disparity continues to exist.

Personally, I agree with this argument. If you disagree, that is fine -- but please show a little bit more respect.


>It has been postulated, I think plausibly, that the commonality of such language is one reason why this gender disparity continues to exist.

The idea that "the commonality of such language" is the reason, is like the idea that video games and action movies create killers and criminals.


Okay, I agree. The ideas indeed have much in common.

My point stands here equally well, for the same reason. (This idea I happen to disagree with, but that's not relevant.) The idea you mentioned is not obviously stupid. I don't see what dismissing the idea without refuting it adds to the conversation.


"and try befriending some people with very much opposite viewpoints to your own." Thank you for your judgements on my friendship circles; those I have relationships with are of very diverse opinions on gender/sexual/racial/wider politicial issues.

His reaction may have been juvenile and extreme, but I certainly don't think it was sexist. What makes you say that?


I disagree. In contemporary professional and technical society, the politically incorrect occupy a disadvantaged position. The person being accused of political incorrectness is automatically assumed to be a person of the worst sort - sexist, unthinking, mean.

Scott Locklin is playing the game of taking proud ownership of his own labels. "I'm not a member of your cool modern hipster club, I'm just a geek who cares about fast computation." For comparison, see also "I'm a man hating dyke, meh, I get more pussy than you."


> In contemporary professional and technical society, the politically incorrect occupy a disadvantaged position.

This is not true in the least. The "politically incorrect", as you put it, get free license to say and do sexist and racist things, and it's up to the people affected by that to speak up. When those people do speak up, usually their attempt to bring up that point is the subject of criticism.

> The person being accused of political incorrectness is automatically assumed to be a person of the worst sort - sexist, unthinking, mean.

That someone's tears and fears of being called out on being a sexist are considered more of a problem than their sexist words and behavior is a key part of having privilege.

> For comparison, see also "I'm a man hating dyke, meh, I get more pussy than you."

That statement drips with misogyny, homophobia, and transphobia. If someone wants to take ownership of those aspects of themselves, that's fine, but that person deserves to be ridiculed for it.


The "politically incorrect", as you put it, get free license

No they don't. If they phrase factually correct statements the wrong way, society condemns and penalizes them. Larry Summers is a perfect illustration of this.

That someone's tears and fears of being called out on being a sexist are considered more of a problem than their sexist words and behavior is a key part of having privilege.

It's not considered "more of a problem" by pretty much anyone.

Accusing someone of sexism is just a rhetorical weapon used by the privileged to make dissenters stop speaking truth to power. A person whose beliefs align them with modern liberalism has an extraordinary privilege - they do not suffer the presumption that they are bad people with evil beliefs. They can make content-free assertions and ad-hominem attacks against those who disagree, while in contrast those who disagree must present mountains of evidence for their position.

(Oh yes I am being the jerk who uses the language of political correctness against itself.)

As for the statement you describe as "drips with misogyny, homophobia and transphobia", it's a statement I heard made by a lesbian who was taking ownership of the word "dyke". If you wish to ridicule lesbians who claim "dyke" for their own (or blacks who claim "nigger" for their own), be my guest.


Yep. I was enjoying the article until I read that comment, at which point I lost all respect for the author.


> You may now go fuck yourself with a carrot scraper in whatever gender-free orifice you have available.

How the fuck is that "wonderful"? The comment was something along the lines of "I wish people wouldn't use that kind of language", and the response was in the form "If I insult you enough, I don't have to acknowledge your argument". Looking at the other comments, the one nitpicking about the choice of Lady Gaga is equally as useless, but strangely that one didn't get an angry reply.


That line was a bit juvenile, granted. The other comment wasn't really nitpicking. The commenter believed that Gaga has genuine musical ability, and the author wanted to make it clear that it was that classical training that made her the target of his blog. I don't want to misinterpret your meaning here, but are you trying to suggest the author would only react angrily to a female/feminist commentator?


Not to a female commentator (I've no idea if the commentator was female in this case). More to a comment that deals with issues that are usually associated with feminism. And I wouldn't use the word "only"; in a much more hostile manner than the response to an average nitpick? Yes, that sums it up. The Gaga comment was nitpicking; off topic and trivial. The comment on casual sexism was also. Yet only one of them actually had its issue addressed.


The Gaga nitpicking wasn't completely off topic though. The selection of Gaga as the target was chosen with the express purpose of demonstrating a point; Gaga has classical training, yet still produces what most (at least the author) consider to be trash. This mirrors his comment on the poor design of modern languages versus the APL languages, and a commentator didn't get this link so he spelled it out for them. On the other hand, the casual sexism call was completely off topic.


The one on lady Gaga had a point which relates to the content of my post. It was not only relevant, but interesting, as was his subsequent comment on Alcubierre drives, which is a subject for a forthcoming post already taking shape. The dork who was attempting to "correct" my language was not interesting, and totalitarian numskulls like that deserve nothing but whatever blistering scorn I can muster at that particular moment.

The fact that this is even "a thing" with 30-odd comments about it is a sad commentary on the illness which afflicts modern civilization. If this isn't clear: I don't like people with such "refined" sensibilities, and they are invited to never, ever read my blog. If you're that sensitive, you belong by yourself in a yurt, where nobody's language or opinions will bother you.

Personally, I've never met an actual programmer of any race, gender or creed who actually is that sensitive. People that sensitive are mentally ill, and don't have jobs. I've met plenty of power-mad Cotton Mather sickos who think they have the right to police the words and opinions of others though. The fact that people think witch hunts over a fscking word are OK; that is not OK.


I, for one, find it wonderful.

As for people saying "I wish people wouldn't use that kind of language" -- those are the scum of the earth.


So, here's the thing about conversations like this: We have a broken culture in technology. We can know we have a broken culture in tech by looking at the number of women within it.

Given that reality, I'm coming down more on the side of stopping and listening when I hear a criticism of elements of our culture, rather than becoming reactionary. The criticism may not be right...but, it's worth listening and thinking about it, rather than just jumping down someone's throat with an angry and childish insult. Is it "political correctness" or actually language that turns people away? I don't have the answers...but, programming wasn't always a male-dominated field. The fact that it has become more and more so over time tells me something is wrong. Listening to ideas on what that wrong thing (or things) might be is probably a start on fixing it.


I agree, but there is a time and a place for these discussions, and there are better ways of handling it than calling him out in his own comments. If you wanted to address perceived casual sexism through turns of phrase such as these, a better angle would be to collect examples from such blog posts and author a new post with the purpose of discussing this use of language.

I don't think gender equality in the industry is worsening, as you say. It certainly has worsened in recent years, but the strong movement in support of women in CS is gaining traction. My opinion on it may be jaded by "white male privilege", but to me it seems the balance is starting to be reinstated slightly. The number of women joining the department each year has been increasing since I entered the degree, and recently released statistics actually showed there are now more women than men in the post-graduate CS degrees. Whether this is the case everywhere, I don't know (and doubt), but it's certainly a start.


I think that would also be a useful approach. But, I've been involved in activist circles lately, and something I've learned in that is that calling behavior out when it happens can prevent resentments and problems from growing; and, having that culture of calling people on their shit can allow someone who may have walked away to feel empowered to speak up when they feel uncomfortable with something someone has said.

Again, I'm not sure I understand the nuances of why someone felt uncomfortable with this particular piece of language; but if I assume good intentions, all it costs me is a moment of my time to think about it and try to understand. And maybe, by taking that time and showing a little respect for that view, I'll help someone feel welcome into this community I've been a part of for so long. I think it's probably worth the trouble...and, in the end, I'm pretty sure I'm a better person than I was a couple years ago before I got involved in this kind of activism.

I don't know how to convince others to stop being reactionary, and instead simply think about what's being said. Anti-oppression training might be cool at tech conferences, but I suspect it'd be the least attended track.


"having that culture of calling people on their shit can allow someone who may have walked away to feel empowered to speak up when they feel uncomfortable with something someone has said." I certainly appreciate that perspective on it; it is empowering to the victims. However, as in this case, it can also alienate those they are calling if it was not genuine sexism. There are cases where similar language is used in a sexist fashion, and it would be a bit of a stretch to call this one of them, so in this case it only caused an adverse reaction. I guess there is a careful balancing act to empowerment.

"I don't know how to convince others to stop being reactionary" If it helps, your views have helped me consider my reaction to this. I understand and empathise with why he reacted how he did, but my reaction to his post was probably a step further into inappropriate. Thank you.


I looked for it after seeing this comment, and the response came across as kind of juvenile to me.


It was a juvenile response to a juvenile comment


If it was a juvenility contest, the response won by lengths. The response also won in mean-spiritedness and unnecessary escalation.


There is no shortage of excellent work being done today, too. With the large systems that are being created today, I would take Python over APL or J any day. Can you imagine building a company like Google or Facebook on top of APL, even assuming you would manage to find a team of good APL programmers? It is not a language fit for cooperation nor for large systems; it is a problem-solving language, from back in the day when people used computers for relatively small tasks.

Plenty of the good things in APL are now available in other forms anyway. NumPy, with its powerful set of linear algebra primitives and its huge set of implemented algorithms, is just as great at data crunching. Want mmapped files? No problem:

http://docs.scipy.org/doc/numpy/reference/generated/numpy.me...
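
A minimal sketch of what that looks like with numpy.memmap (the file name and shape are made up):

    import numpy as np

    # Create a small file-backed array on disk, write to it, then reopen read-only.
    fname = "ticks.dat"  # hypothetical file
    mm = np.memmap(fname, dtype="float64", mode="w+", shape=(1_000, 4))
    mm[0] = [1.0, 2.0, 3.0, 4.0]
    mm.flush()

    ro = np.memmap(fname, dtype="float64", mode="r", shape=(1_000, 4))
    print(ro[0])   # [1. 2. 3. 4.]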

That's not to say APL is not worth studying, it is built on powerful ideas. But to study the masters is to criticize the masters, not only blindly admire them. And history is not all romantic.


Maybe we've got it backwards. The only true problem in software development is miscommunication, and the bigger teams get, the worse they perform per programmer, with the most productivity loss experienced by the most senior team members. On large teams, the most experienced members are often handicapped by having to operate in a context that remains comprehensible to junior team members, investing so much time in communication that they spend not much of it on building things. That in turn produces a sort of lowest-common-denominator effect, where designs are deliberately kept overly simplistic so they will be easy to communicate.

The more expertise is applied to a solution, the less likely it is that people can cooperate on that codebase. The languages and frameworks of today try to minimize the need for expertise to get things done, because expertise doesn't blend well with communicating about code. I think part of the ideas in the article is that by running away from expertise we have often arrived at inefficient solutions. We scale out, with huge codebases that are like cities, sprawling and vast yet with each component isolated and easy to comprehend. Learning from the masters means understanding complex ideas which are based on scaling up, on building a project-specific grammar that encapsulates the problem domain so elegantly that it allows for one line features, at the expense of being incomprehensible to someone new to the codebase or the field.


This actually sorta rhymes with the design goals of Java -- a broadly featuresome language, with lots of safety nets, that could be used relatively effectively by the median programmer.


Having worked in Java for several years, this is both a great strength and a weakness in my opinion. On the one hand it keeps everyone speaking the same language, as opposed to some languages where it seems each team is speaking its own dialect. On the other hand it can shackle the truly proficient, makes it easier for complacent programmers to remain so, and masks (temporarily) the incompetence of people who should not be writing production systems.


Yes, but Java acknowledges that the bulk of programmers lie near a median. It's a language for an industrial age, full of automation and great steel buttresses. Those who hanker for surgery, or lightsabres, or precision archery, or any number of analogical departures -- they naturally drift to other languages.


There are big companies built on top of APL-like languages. One example is Morgan Stanley (K).

You are correct that not everything is built on K - typically there are gigantic concurrent enterprisey systems built in Java/C++/Scala which shell out to a computational core built in K.

You are also right that Python/Numpy is getting to be a good alternative to K, giving you less performance but better cooperation. That's why BankAm is building their SecDB clone on top of it.


I definitely miss having the Morgan Stanley site license for K at my disposal. I can't quite justify the money to get a license, just for me, so I've been slowly assembling a poor-man's substitute built on numpy, pandas, and hdf5.

K itself isn't as consistent as I'd like (odd corners like the auto-casting arrays, especially dict arrays to tables, and skipping NaNs in sums), but it's a great place to start.

J really needs a table or dictionary data type, or at least the tools to make one yourself.


If you can live with K3, you might want to take a look at kona: https://github.com/kevinlawler/kona/


Mind sharing some of your background? How did you get to know it, what were you doing with it, how did you learn it, what did you like about it?


Sure. At MS, I used it while working on mortgage prepayment and default models. Before we got the K license, I had been using SAS and R. There, I had been dumping aggregated (or sometimes not) CSV files from the database (Sybase) and running my model fits based on those.

I had a bit of exposure to A+ then, trying to debug bits of the interest rate model subsystem. I hated it. We were mostly using A+ as the wire protocol at that point, and I had endless problems getting the APL fonts set up, and so on. Non-ASCII was a huge black mark.

Once we got the license and expanded the team working on the models, we decided to switch over to using kdb+ for the basic data store. That switch was like going from night to day. I could operate over the entire dataset, rather than just on a subset. I could query things on the fly with real aggregations. Things that were a huge pain (and slow) using SQL were suddenly fast and easy, like "calculate the average default rate grouped by FICO in 25-point buckets, for loans issued after 2005" or "backfill missing data from the first non-missing observation."

The first example can be done in SQL, but it takes a whole lot more typing and is (IMHO) much more error-prone. The last, I'm sure you can do it, but it's a nuisance.
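
For comparison, roughly what that first query looks like in Python/pandas (the DataFrame and column names are invented; this is a sketch, not the actual kdb+/q code):

    import pandas as pd

    # Hypothetical loan-level data: FICO score, issue year, default flag (0/1).
    loans = pd.DataFrame({
        "fico":       [620, 640, 710, 745, 790],
        "issue_year": [2004, 2006, 2007, 2006, 2008],
        "defaulted":  [1, 1, 0, 0, 0],
    })

    recent = loans[loans["issue_year"] > 2005]
    by_bucket = recent.groupby(recent["fico"] // 25 * 25)["defaulted"].mean()
    print(by_bucket)   # average default rate per 25-point FICO bucket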

I first got comfortable with kdb+ as a database query language. I read "q for mortals" by Jeff Borror, and we actually had Jeff around for a while for questions.

After that, we started actually using q to estimate the models. It was great at the data aggregation and was a pretty natural fit. It didn't have anything built in, so we wrote a few functions for multinomial logistic regression, and that was good enough. That part didn't have a huge advantage over using something like R. However, the interaction with the large data store was far superior, better enough that it was worth it to write our own routines to do the estimation.

We even used q to run the models in production. That was more of a reach, but it worked pretty well.

So, I liked kdb+'s ability to handle large data sets in a speedy way. It makes a fantastic query language. I liked q's conciseness. It doesn't matter much for actual in-production code, but if you're doing research, every character counts. It seems shallow, but I think it's a real effect. Writing a half-line of q to do a paragraph of SQL is a huge win. I liked q's array operations, but by then I'd already been using Numeric and numpy and such for a decade, so those weren't really new. It was nice to be able to write them easily, as opposed to something like "np.concatenate((foo,bar))", but it wasn't really novel.

Basically, it was kdb+ that sold me on q.


Thanks, that's very interesting. Q for mortals is online and seems to be a good read, if anyone's interested:

http://code.kx.com/wiki/JB:QforMortals2/contents


And R also has mmap, which is the relevant package to reference rather than bigmemory in the original blog post?


bigmemory is a better package.


"To study today's programmers is to criticize them, not blindly admire them. And the present is not all romantic."


How is Lush an old language? It looks like a language that was released last decade but just never caught on.


Lush as a language was developed in 1987, just like it says on the website. Older versions of it are reading the numbers off of your checks when you deposit them at the ATM. The ideas it embodies are significantly older than Common Lisp: it is a very pure hacker's lisp; it sorta has the eau de zetalisp about it (not that I really know anything about zetalisp, I ain't that old, but I've read about it). Studying it will make your brain grow. http://lush.sourceforge.net/credits.html


Actually it says that Lush's predecessors were developed in 1987. Version 1 of Lush was released last decade.

But even if it was 1987, that's not older than Perl or Python.


Well, if you download, say, TL3 from Leon's page (a subset of SN), you'll immediately see that the "predecessors" are essentially the same language. Same Ogre, same UI. You can even call pieces of it in modern Lush. The name change is irrelevant. http://leon.bottou.org/projects/tl3


Perl was started in 1987, Python in 1989 [1].

[1] http://python-history.blogspot.com/2009/01/brief-timeline-of...


I just wish that making an error in the debugging context didn't destroy my state. If it weren't for that, I might have used Lush again after Professor LeCun's Machine Learning class.


"eau de zetalisp" => "water of zetalisp"


I used APL extensively for about ten years. I was introduced to the language by a physics professor in college who was quite vocal about having his students learn the language. I ended-up using it professionally for a wide range of applications, from inventory management and core business operating systems to robotics and DNA sequencing.

While the language is wonderful, I would not use it for production systems today. It is my opinion that the language came along too early and, as a result, has not seen innovation for decades. It also does not help that the leading APL implementations cost thousands of dollars to license. APL is still used extensively in financial institutions on Wall Street.

J has been mentioned here a few times. Even though J is the work of APL's creator, Ken Iverson, whom I had the pleasure of meeting at various APL conferences, I consider it an abomination and a huge step backwards. I have a feeling Ken might have sought more commercial appeal by using ASCII characters rather than specialized notation. Ironic because of [1]. However, in making this switch, he created a horrible mutation that anyone can rightly recoil at. The language looks like a random vomiting of ASCII characters on the screen. Sad.

Many have commented on APL on this thread. In most cases it is obvious that these comments come from lack of or limited experience with the language. The power of APL comes from two things: Abstraction and Notation.

Abstraction is very much like objects in OO languages or specialized symbols in mathematics.

Notation --through specialized symbols-- is huge. Ken Iverson's own "Notation as a tool for thought" [1] covers this in far more depth than I could attempt here.

I'll just mention one example: music notation. A page in an orchestral score looks like gibberish to someone unskilled in the art. However, to a skilled professional it reads, well, like music. While I am not at that level, I can read guitar or piano sheet music and imagine, for lack of a better word, the music with reasonable accuracy.

That's what happens with APL if you really know it: You read it like music. Much of the criticism and misguided opinion comes from people who simply don't have this level of proficiency with the language. All other languages allow you to be a pretty decent "hobby" programmer in the sense that you don't have to be able to read it like music and you can still produce and maintain software. APL, for non-trivial work, requires true expertise, not superficial exposure. It rewards you with unparalleled productivity and expressive power. It really is a cool language, but you have to invest the time to learn it and read it like music.

Another way to put it: It's like vim. It's a shit code editor until you invest the time and effort to use it instinctively, to play it like a reasonably skilled musician would play a piano: without having to think about which finger to use to press a certain key to produce the desired tone.

That's APL. It's very different than using C or Java. It requires expertise, time and dedication. And perhaps that's the source of its demise. Today most programmers don't care about diving to those depths. Most want to learn the basics and poke around API and code libraries to get shit done and consider super-simple stuff like pointers confusing and complicated.

BTW, to me Python has a number of APL-like features that make it fun and productive. I really enjoy it.

[1] http://www.jdl.ac.cn/turing/pdf/p444-iverson.pdf


> BTW, to me Python has a number of APL-like features that make it fun and productive

Can you please elaborate on this and give a few examples?


Here's one example from playing around on Project Euler:

  # Python 2: range() returns lists here, so + concatenates them
  def problem_1a():
      return sum(set(range(3,1000,3) + range(5,1000,5)))
While more verbose than the APL equivalent, this is the kind of structure you would see if this was written in APL. In fact, you could translate this almost directly into APL.


I used to know APL2 well, and worked on some large workspaces at Merrill Lynch, 10K's of LoCs. I really liked the language but not the job. There were language dialects, you could tell if somebody had a Sharp or STSC background, or if they had come from Morgan Stanley (the code at Merrill and Lehman was pretty similar, at least in the area I worked in). Interesting times.

___________

also compare "under" in J to loan patterns and all the resource allocation mechanisms in algol family languages

http://prog21.dadgum.com/121.html


Right. On the PC I started off with STSC. I had a brief exposure to APL on mainframes but the PC version was far more capable and useful. I liked some of the direction taken by IBM with APL2 (nested arrays and other primitives) while keeping to the concept of notation being just as powerful as abstraction.

Also, our 10K LOC workspaces were potentially equivalent to hundreds of thousands of lines of code in something like C. Another way to put it is that a single programmer can do the job of five or ten programmers in another language. And do it faster.


> It also does not help that the leading APL implementations cost thousands of dollars to license.

I'm wondering what your opinion is of A+ (dialect of APL from Morgan Stanley)? GPLv2 codebase, supposedly is/was used in a commercial context, so should be pretty stable. Unfortunately, the last activity on the project seems to be from 2008. To my uninitiated eye, it still appears to be the best thing out there for learning and dabbling in APL.

The A+ website is at http://www.aplusdev.org/


A+ is an earlier effort by Arthur Whitney, who went on to create K and kdb+ (discussed in Scott's article).


> It also does not help that the leading APL implementations cost thousands of dollars to license.

FYI: The RaspberryPi port of Dyalog's APL is available for free. See http://packages.dyalog.com


The only downside to J notation is some of the primitives use more than one character (something K solves). Otherwise, the notes are just written differently. I find old school APL impossible to read, basically because I never bothered learning it. Yeah, it looks like line noise to the uninitiated: so does APL notation. It's easy enough to change primitive names to stuff which is more wordy, but by the time you're capable of it, the original notation becomes a feature, not a bug. Q'Nial was more wordy and easier to get into. I thought of using it, but the author is not interested in keeping it going.


> The only downside to J notation is some of the primitives use more than one character

You are missing the point. I could proclaim that from now on we are going to use the /$ character sequence to implement the integration function in a hypothetical language. I could similarly continue with that idea and create a bunch of transliterations and cover a wide range of functionality. And, yes, people would be able to learn it and learn to read it. One example of this is reading regular expressions.

However, having said that, you have not created a new tool for thought; all you have created is the ability to --as I described it before, because I despise it-- vomit a bunch of ASCII characters into a file and call it a program.

Notation is powerful. The association of symbols or icons with ideas is powerful. The play, stop, fast forward, fast rewind and other buttons on your remote are easy to recognize, read and use. You could purposely avoid using symbols and use words: "FFWD", "STOP", "PLAY", etc. However, using symbols (notation) is by far the most efficient and communicable tool.

Going back to the integral, well, the existing mathematical integral symbol conveys the entire concept very efficiently.

J, to some extent, comes from an era when it was downright painful to even attempt to print or display APL characters. You had to replace character ROMs in printers, graphics cards and terminals. You had to replace print heads on other machines. APL characters before expanded character sets and the ubiquity of GUI-based operating systems were painful to implement and communicate. J attempted to solve this by transliterating from the APL symbol set to an ASCII-based equivalent. When that happened one of the most powerful aspects of APL was thrown out the window.

That's why I call J an abomination. The functionality offered by J is probably an improvement over APL. However, walking away from attaching meaning to custom symbols was a huge step backwards. Perhaps Iverson thought the reason APL was not seeing wider adoption was the character set, gave in and sought to fix it by going to ASCII. In my not-so-humble opinion, that was a huge mistake. ASCII is gibberish. Symbols, much like a page full of mathematical equations, mean creating and using a new language that becomes part of your mental process.


I don't think I'm missing the point: I think the notation is just fine -certainly nicer than regular expressions. You're just not used to it. Lots of people use K.


No, you are missing the point. Take a page full of intense mathematical notation or an orchestral score and translate it into either full words (i.e.: "integral" in place of the integral symbol) or one, two or three character ASCII mnemonics. You should realize that the original is far more powerful a tool for communicating thoughts, ideas and the subject at hand than a seemingly random dump of ASCII on paper.

You did mention that you did not bother to learn APL. That's the reason you don't get it. I am trying to use mathematical and musical notation to illustrate the value of notation. It is obvious I am not doing a good job.

Let's just disagree and move on.


It's true that APL was originally designed as a notation tool, and maybe J fails at that, at least for beginners (which I am; only a few months in), but it certainly works very well for me as a language. Bonus: if my prop trading algos ever get lifted, there are only a few people who will be able to make anything of 'em. I see above you're looking at its successor, K. Maybe you will like it better, but ultimately the idea was to reduce J's character set to one character per primitive.


If you read about the history of APL you'll see that Iverson developed the basic notation for the description of processors at a low level.


> While the language is wonderful I would not use it for production systems today. It is my opinion that the language came to be too early and, as a result, has not seen innovation for decades.

What does APL need that only more modern languages have?


> What does APL need that only more modern languages have?

Let me preface this by saying that I've been away from regular use of APL for years. I've forgotten a lot of the details but I'll throw out a few seat-of-the-pants thoughts on this.

In no particular order (and not specifically related to modern languages):

  The option for compilation would be very interesting.

  Within a function, variables are global by default.  Huge pain in the ass.

  Objects.

  A language-defined interface to C (or some other suitable language) for 
  when you need to speed things up.

  An extension to nested arrays that allows you to create C-like structures.
  I am going after the self-documenting nature of having structure members 
  have name-based access.

  An extension to function definition to allow for more than two arguments. 
  This would require a little language re-thinking.

  I would also look into making a hybrid that includes some C-like syntax
  with an APL core.  C/C++ -like comments and conditionals with bracketed
  groupings of APL code would go far in making code easier to read and 
  organize.

  Native C-like switch() primitive with APL-inspired extensions.  For example,
  switch could take a vector argument.

  APL-run-time-definable comparison operator.  Sometimes I want to apply a complex
  evaluation function to data rather than the simple equal, not equal, greater/less
  than, etc. operators.

  The ability to enforce data types if desired.

  A means of telling the compiler/interpreter to optimize code 
  --at the statement level-- for speed or resource (memory footprint) and 
  other criteria.  Certain operations can explode into huge multidimensional
  arrays which isn't always the best idea.

  Real-time extensions integrated into the language.

  Ability to use GPU resources for computation.

  Standard interface for custom hardware-based acceleration (read: custom 
  FPGA boards).  

  True binary vectors and arrays.

  More flexibility in multidimensional array indexing.

  A rethinking of the workspace model to better support multi-developer 
  environments.

  Genetic and Evolutionary computing primitives built into the language.

  Built in primitives for multi-threaded/multitasked computing.

  Built in primitives for network access and processing.

  Built in primitives for multi-core and distributed computing.

  Built-in primitives for data exchange.  For example, ingest or output 
  nested array data from/to JSON, SQL, XML or other modern data formats.

  Better pattern-matching primitives.  I'm thinking at least regular 
  expressions. 

  A general cleanup of the notation to remove lame text-based "fixes" 
  from an era when rendering non-ASCII characters required custom ROMs 
  on the graphics card.  All APL notation should be symbolic.  ASCII 
  should be limited to strings, function, variable, object and other 
  constructs that require text.

  Make it open source and make the open source version far superior to 
  any available commercial version.

Like I said, I've been away from APL for a while. I am sure there are a lot more important enhancements I could suggest if I spent a year seriously getting back into the game. Today I have very little use for APL outside of using it as an advanced calculator of sorts, mostly when I do hardware design. Even then, sometimes it is more convenient to use Excel for that purpose because of the value in being able to communicate with others.


K answers quite a few of these (multiple args, multithreading, network built into the language, and automatic SSE/AVX use to name a few).

K4 is a much simpler and smaller language than APL - and yet the programs come out simpler and shorter, and usually much faster. Unlike APL, it can be tokenized and parsed in advance -- meaning that, at least theoretically, a compiler can be written. Arthur's implementation is a bytecode virtual machine, but AFAIK there is no type inference there.

e.g. there are only 3 conceptual data types in K4: atom, list=array=vector, and dict. A matrix is a vector of vectors (what APL would call a nested vector), vastly simplifying APL's nested and axis operators.
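
A rough Python analogue of those three types, just to make the shape of the data model concrete (this illustrates the idea, not K semantics):

  atom   = 42.0
  vector = [1.0, 2.0, 3.0]                                   # list = array = vector
  d      = {"price": [101.2, 101.4], "size": [300, 150]}     # dict; a kdb+ table is roughly a dict of columns
  matrix = [[1, 2, 3], [4, 5, 6]]                            # a matrix is just a vector of vectors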


I'll look up the language. I never spent any time on it. I have to say that I am somewhat of a purist when it comes to the idea (and real power) of notation. If we are going to evolve computing we need to try to move away from trying to fit everything into some mutation of an ASCII bag. I am not proposing graphical programming, been there, done that, it blows. However, a carefully done true language (meaning something with a unique and concise notation with which to express ideas) could be incredibly powerful and evolutionary.

A quick comparison of the code necessary to calculate all primes up to a limit in K and APL:

  (!R)@&{&/x!/:2_!x}'!R

  (2=+⌿0=(⍳X)∘.|⍳X)/⍳X
This tells me that K was written to fit APL functionality into an ASCII character set. I get it. I understand where some of that came from around that time. As I said before, it was hard to display and generally deal with non-ASCII character sets. K was created around the time of Windows 3.0. It was still the wild west. I can totally see wanting to have the same level of abstraction with easy-to-deal-with ASCII transliterations. The problem is that this is absolutely going in the wrong direction. Notation (symbols, icons) is incredibly powerful and is at the core of some of APL's ability to become a tool for thought.
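
For anyone who can't read either line, here is roughly what both compute, spelled out in NumPy (a sketch of the same brute-force outer-product idea, not a translation of either notation):

  import numpy as np

  def primes_upto(x):
      n = np.arange(1, x + 1)
      # outer "divides" table: entry [i, j] is True when n[i] divides n[j]
      divides = (n[None, :] % n[:, None]) == 0
      # a number is prime iff it has exactly two divisors: 1 and itself
      return n[divides.sum(axis=0) == 2]

  print(primes_upto(30))    # [ 2  3  5  7 11 13 17 19 23 29]

Like both originals, it materializes an x-by-x table, so it's a demonstration of the idiom rather than an efficient sieve.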

I feel the same way about languages such as Objective-C. Another abomination. There was no reason to do that. Nothing whatsoever was gained by saying: "Hey, let's do the same thing but come up with a different way to write it." Sure, O-C has a few nice things here and there but it did not advance computing in any appreciable way as far as I can identify.

As for the other extensions you listed come with K, that's probably great. I'd have to dive deeper into the language in order to offer even a superficial opinion on that.

One of the reasons I abandoned languages such as APL, Forth and Lisp, languages that I used extensively for many, many years, is that they became less and less practical and relevant. I can apply C to nearly everything, from embedded to system work and even modern hybrids where FPGAs are integrated with capable microprocessors. On the hardware front, languages like Verilog are very reminiscent of C and, as long as you understand that you are actually describing hardware and NOT writing software, are easy to pick up with the appropriate background. You move up to languages such as C++ and other layers open up. PHP, Python, Objective-C when I absolutely must, and Java if I have no choice. All of these are flexible tools that have, for the most part, remained relevant and useful for years.

That range of applicability will never be achieved with something like APL. If an APL-like language is going to come to the forefront it will be for very specialized applications where it makes sense. It will not be to run a shopping cart on a website or control a servo on my robot. That's just reality. I love APL. I devoted a huge chunk of my professional life to it. I can't see using it or the wanna-be variants for anything today. Sorry.


K fits APL functionality into ASCII by selecting a smaller (but empirically, better suited) "base". It's not a transliteration of APL into ASCII.

e.g., the first element operator is unary asterisk, "*x", which is like C's pointer dereference (which gets the first element...). There is no symbol for "last element" - instead you use "*|x", meaning "first of reverse of x". Similarly, there are no compress/expand operators; instead, there's a "replicate" operator which unifies and simplifies both (and has other uses to boot).

In fact, it looks like Arthur used the number of ASCII symbols as a constraint for the number of primitives - which are all single characters. (They can be postfixed with a colon to force the monadic interpretation, but that's not part of the operator.) As a result, K is as much notation as APL is. The domain is "writing real world programs" instead of "expressing algorithms", and it shows, but it's still notation that -- once acquired -- reads like math or APL.

> One of the reasons I abandoned languages such as APL, Forth and Lisp, languages that I used extensively for many, many years is that they became less and less practical and relevant.

I suspect it's going to make a comeback. The APL / J / K computation model is much easier to apply to GPUs. But prediction of trends is very hard -- doubly so when it is about the future :)


A long time ago in a galaxy far, far away I had a project on the table to build a hardware-based APL machine with enhancements and extensions to the language. We looked at it very critically and decided that, while applications might be out there, the cost of sales and educating the potential customers was just too great. Just because you can do something it doesn't mean you should. Today things might be different.


I've been doing the Project Euler problems and always love seeing the APL-like solutions. Amazingly expressive stuff that would be fun to get into some day.


My only knowledge of APL is second hand via Scott Meyers' "More Effective C++", where he gives it as an example of the benefits offered by lazy evaluation, saying that rather than compute the results of each statement in series, it builds a compound expression and computes only the result needed.

In my own situation APL is a language looking for a problem - I don't currently have a use case for it.

http://books.google.ie/books?id=azvE8V0c-mYC&pg=PT121&lpg=PT...


1/2 our codebase is in APL. It is a good calculator.


If anyone here is proficient at J, please (pretty please) contribute a guide to Learn X in Y [0]!! I don't run the site, I just really want a J guide in this format.

[0]: http://learnxinyminutes.com/


Try this: http://www.jsoftware.com/help/primer/contents.htm

Or, download it and run one of the intro labs; that's even better.


As Scott Locklin said in a parallel comment, J already comes with excellent documentation and tutorials. This would be a better starting place: http://www.jsoftware.com/jwiki/FrontPage

As a language, J isn't ever going to fit comfortably into a "Learn X in Y" format -- there are too many unfamiliar ideas, it isn't just syntax.


As a random aside, why do APL programmers try to import their idioms into C? Take a look at the VM code in J or Kona and prepare for a shock.

It's too bad. I find VMs interesting, but I wouldn't touch that C with a ten-foot pole. It's IOCCC quality.


It's a distinct style of C that evolved over decades for implementing APLs. It's more akin to an embedded DSL than it is to normal systems code, and the reason people use it is not that it's obscure but that it fits the problem space well.

The "shock" of looking at it is exactly the same as the shock of looking at APL or K programs to begin with. They don't look anything like code that most people are used to, but since they tend to be highly regular, the impression of noise is misleading.


I'm not sure a general lack of useful commenting and a subsistence on single-variable names is defensible. It sure seems like a case of "You can write Fortran in any language".


There's a great link in the original comments to a video from 1974, where Iverson, Falkoff, and others discuss the origins of APL. It was a pleasure to watch. A peculiar contrast in language and attitude to today's rockstars and ninjas :)

https://www.youtube.com/watch?v=8kUQWuK1L4w


"Big data problems are almost all inherently columnar..."

Can you name some that are not?


Great article. It seems like once you get beyond the hype of Big Data, you get to some old technologies whose primary use cases are finally coming to light. The more things change...


q is a neat language, and I spent some time studying it when I was in finance, because it's useful to know q/kdb if you're a quant.

Here's an offensive but accurate taxonomy. There are 6-month languages/technologies and 20-year languages. The 6-month set are optimized toward being quick to learn. Java, for example, inherited C++ syntax while cleaning up the OOP, so the C++ programmers of the time could pick it up quickly.

Now, q and kdb are 20-year languages. It doesn't take 20 years to be productive with them, of course; it's that they're designed to optimize a person's productivity over 20 years, and not in the first 6 months. Unlike the various distributed tools out there (Hadoop, Hive) it will take you a long time to learn them, because they aren't like anything else. Superficially, q is Lisp-like, but the performance implications are very different and it doesn't look like a Lisp at all. (It parses right-to-left, for one example; also, it's written as a stream rather than in nested s-expressions.)

Many of these modern tools being bashed have grown up in a world where 20-year tools can't really be built; they'll never catch on commercially in a world where people change jobs every 2 or 3 years, when MVPs are more important than solid decades-proof engineering, and where employers are pretty averse to their people taking time to just learn new technologies.

There's too much volatility in the world for it to make sense for most people to learn q and kdb right away. You can get half-decent in a couple months, but it'll take much longer to learn why those tools are (for some use cases) so damn powerful.

This 6-month vs. 20-year tool concern is part of why I've gravitated toward Clojure recently. I haven't seen anyone else do quite as good a job of balancing those two need sets as Rich Hickey and the Clojure community have. Clojure is (becoming) a 20-year language, but it's no harder to become basically productive in than Python or Ruby.


This paints a really misleading picture of what is available in Hadoop right now. Columnar data formats? Yep, we've got them-- see Parquet [http://parquet.io/]. Yes, Hive spends a lot of its time reading and writing data. This is because it decomposes SQL queries into sets of MapReduce jobs, all of which must take input from the filesystem and write it to the filesystem. That's one of the reasons Cloudera Impala was developed [http://blog.cloudera.com/blog/2012/10/cloudera-impala-real-t...].

If you don't like for loops, yep, we've got that too-- use Scala or another functional programming language to write your MapReduce jobs. Or use SQL, which is a lot more powerful than APL, and a lot more accessible to businessey types than functional programming. SQL is a declarative programming language, by the way-- I don't see any for loops there.

The efficiency argument makes no sense either. Does the author understand that a just-in-time dynamically recompiling virtual machine is faster than an interpreter? If so, he doesn't mention it anywhere. You know, sometimes old technologies are just... old. You could at least compare Java and the JVM to something like Lisp machines.

People overestimate the gains that are to be had from mmap. I am currently in the process of adding mmap support to HDFS and I know what I am talking about. mmap gives gains, but only when the data is coming from memory, and only when you can reuse the mmap. Otherwise, you're better off reading even a 512 MB file via read() and write(). The reason is that syscall overhead is not that high on modern UNIXes (like Linux), and page faults involve a transition into kernel space anyway.


Or use SQL, which is a lot more powerful than APL

How could that possibly be true? What does "powerful" mean? Certainly not that it can compute things that APL family languages can't, and the design of APL/J/K etc. as languages is so much better that it isn't even fun. SQL is the COBOL of the 21st century.

Does the author understand that a just-in-time dynamically recompiling virtual machine is faster than an interpreter?

As memory latencies relative to instruction execution times go up, this factor becomes less and less relevant. An even worse problem is that your "just-in-time dynamically recompiling virtual machine" can't even vectorize properly, whereas APL has high-level array manipulating operators that make it comparatively easy to generate vectorized code. Your "just-in-time dynamically recompiling virtual machine" would have to "intelligently" recover high-level patterns from seemingly nondescript for() and while() loops to do that. Guess what: it doesn't! When you consider the fact that memory latencies matter these days, the fact that an AVX vector instruction can do a whole lot of work in a single cycle (or a few of them at most), and the fact that the integer units that would otherwise be idle can handle dispatching the next primitive operation in the meantime, there's a potential for even an interpreter to kick your "just-in-time dynamically recompiling virtual machine"'s ass.
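
A crude way to see the shape of that argument, using NumPy as a stand-in for an interpreter that dispatches whole-array primitives (this says nothing about the JVM specifically, and the numbers will vary by machine):

  import time
  import numpy as np

  n = 10 ** 6
  a, b = np.random.rand(n), np.random.rand(n)

  # one interpreted dispatch, then the whole operation runs as a native, vectorizable loop
  t0 = time.perf_counter()
  c = a * b + 1.0
  t1 = time.perf_counter()

  # per-element loop: dispatch overhead paid on every single iteration
  out = np.empty(n)
  t2 = time.perf_counter()
  for i in range(n):
      out[i] = a[i] * b[i] + 1.0
  t3 = time.perf_counter()

  print("array primitive: %.4fs, element loop: %.4fs" % (t1 - t0, t3 - t2))

The first form pays interpreter overhead once per primitive; the second pays it once per element.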


Yes, APL is Turing-complete, whereas SQL is not... until you consider vendor extensions. I guess I should have included the obligatory "I know that..." in the original post. The fact remains that SQL allows you to do complicated queries without having a computer science degree, something I have never seen with APL/J/K, Java, or any other programming language. SQL arrived on the scene in 1974, by the way-- it's not a "21st century" anything.

I agree that the JVM does not use memory well. For example, it has a high per-object memory overhead. And there are things such as lock widening which are a real design problem. But even considering these things, the JVM still manages to consistently beat other choices such as Perl, Python, Ruby, and so forth.

Yes, vector instructions are great, and Java can't really use them. There are also other builtin instructions such as the CRC ones which would be nice to have at the language level. We have been adding JNI segments to make use of these. It isn't the syntax of loops that is the problem, but rather, the presence of side effects in the language, that makes auto-vectorization difficult. Probably annotations should be added to make this easier, like Intel has done for C/C++.

None of this means that APL is going to come back. The scientists and engineers who originally used it back when 4kb was a lot of memory are just going to use R, MATLAB, Mathematica, or another choice like that and gain a lot of the same micro-optimizations you love. People who want to do big queries over giant data sets are going to keep using Hadoop.

The one thing that could maybe dethrone Hadoop is a system that made use of GPUs (graphics processors). Some of them have 512 cores now-- that's a lot of power, and Java can't really harness it at all. But GPU processors are really restricted in how they can communicate and how much memory they can access, so it would not be as much of a general purpose solution.


I have used Hadoop; it is miserably slow (as in 10 to 1000 times slower AND a lot harder to work with) than the comparable K program.

If mmap doesn't bring you significant gains, you are doing stuff wrong. E.g. You serialize objects. Don't.

Hadoop is horrible. I've heard Spark manages reasonable performance (subject to the shackles that Hadoop compatibility entails), but haven't used it.

If you think SQL is "more powerful" than APL... Qualify that. Because e.g. APL is Turing complete, whereas SQL isn't.


As a sidenote, SQL actually is Turing complete: http://stackoverflow.com/questions/900055/is-sql-or-even-tsq.... Not that you'd even want to use it as such.


:) a corollary of Greenspun's tenth rule: every language spec will be revised at least until it is possible to implement a slow, incomplete version of Common Lisp inside it.


Why are people attributing k's magical powers to mmap? That's something which should be only marginally faster when it is faster at all, judging by every other product that uses one or the other.


Some of k's magical powers come from mmap.

> That's something which should be marginally faster when it's faster,

No. If used properly, it is crazy faster.

Let's say what you need from the data is element no. 1821942 in the stream.

Hadoop standard: read & deserialize & discard 1821941 items (just so you can get to the 1821942nd item). read & deserialize and use one item.

K standard: (equivalent to): seek directly to element 1821942, read it and use it.

except .. K does it slightly faster than that, using mmap: It just accesses the location for this element in memory. If it is not in memory, then the operating system will arrange for it to appear in memory in a way that's usually more efficient than read. If it is already in memory, then it is exactly one memory read.

Now, if Hadoop & friends stored their data in such a way that you could do a random access read (and used a read syscall) then mmap would _still_ be about 1000 times faster than a read() call for access if the datum is already in the o/s buffers.

The magic is not that K can read stuff faster -- e.g., it can't read a hadoop stream faster than hadoop can. The magic is that K makes it easiest to just store your data in a random-access mmapable way, so that mapping the data back in takes ~10ms, and from then on, you only pay for the data you actually use without having to explicitly read it -- and usually, you pay less than you would have paid if you explicitly read it.
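
A minimal Python sketch of that storage discipline (the file name, record layout, and sizes are made up; the point is fixed-width records in a flat, mappable file):

  import mmap
  import struct

  REC = struct.Struct("<qd")            # hypothetical record: int64 id, float64 value (16 bytes)

  # write a small fixed-width file so the example stands alone
  with open("ticks.bin", "wb") as f:
      for i in range(2000000):
          f.write(REC.pack(i, i * 0.5))

  # grab record #1821942 without reading anything that comes before it
  with open("ticks.bin", "rb") as f:
      mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
      off = 1821942 * REC.size
      rec_id, value = REC.unpack(mm[off:off + REC.size])   # only the touched page(s) get faulted in
      mm.close()

The mapping itself is cheap to set up; the cost you pay afterwards is roughly proportional to the pages you actually touch.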


> No. If used properly, it is crazy faster.

I've been impressed with, for example, Symas's lightning mdb: http://symas.com/mdb/microbench/

However, it's not as if someone replaced my Toyota with a Formula 1 car. Properly used mmap is very nice but some people are getting carried away with the superlatives.

I hope this doesn't come off the wrong way but I really wonder if the "crazy faster" amazingness you're referring to isn't a function of comparing K to Hadoop. Which, you know, might just be setting the bar way too low.


> Properly used mmap is very nice but some people are getting carried away with the superlatives.

Well, it all depends on your access patterns.

The bottom line is: in a modern (later than 1985 or so) Unix system, mmap is hardware-based virtual memory; read() and write() can give you software-based virtual memory. Who do you think is going to have better performance?

Compare and contrast:

a) mmap an 8GB file into 64-bit address space and start using it immediately. Only stuff you actually use gets read from disk.

b) using only read() access, read what you need, manage caching to avoid syscalls, etc.

It is unlikely b) can be faster at all, and it will only be competitive if the granularity of reading + the cost of cache management is low.

In cases where either granularity or cache management is expensive (which is common), mmap is likely to win big time.
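
Roughly the same contrast sketched in Python/NumPy (sizes shrunk so it actually runs; the file name and layout are made up):

  import numpy as np

  rows, cols = 20000, 250                          # stand-in for a much bigger file
  np.zeros((rows, cols)).tofile("big_matrix.f64")  # raw row-major float64, no header

  # (a) mmap route: map the whole thing and use it immediately;
  #     only the pages actually touched are read from disk
  data = np.memmap("big_matrix.f64", dtype=np.float64, mode="r", shape=(rows, cols))
  row_mean = data[7123].mean()                     # faults in a page or two, not ~40 MB

  # (b) read() route: you do the seeking, the reading, and any caching yourself
  with open("big_matrix.f64", "rb") as f:
      f.seek(7123 * cols * 8)
      row = np.frombuffer(f.read(cols * 8), dtype=np.float64)

If you end up touching every page anyway, the two routes converge; the win is for workloads that only need parts of the file.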


Now, if Hadoop & friends stored their data in such a way that you could do a random access read (and used a read syscall) then mmap would _still_ be about 1000 times faster than a read() call for access if the datum is already in the o/s buffers.

Yeah, it's too bad this method doesn't exist:

http://hadoop.apache.org/docs/current/api/org/apache/hadoop/..., byte[], int, int)

It's too bad that this doesn't exist: http://www.parquet.io

It's too bad that this doesn't exist either: https://issues.apache.org/jira/browse/HDFS-4953

You are amazingly, astoundingly, misinformed.

It's no surprise that you are wrong about mmap as well. It's only a performance advantage when the PTEs (page table entries) are already populated. The PTEs may not be populated, even if the file region in question is resident in memory. And even then it's only about 2x, not "1000 times faster". There was no "serialization and deserialization" involved in our benchmarks, either.


> Yeah, it's too bad this method doesn't exist: http://hadoop.apache.org/docs/current/api/org/apache/hadoop/..., byte[], int, int)

And how, if you would help me, do you access object #n in the stream (for non-trivial n)? Not byte #n in the stream, which this appears to give you - but object/record #n?

> It's too bad that this doesn't exist either: https://issues.apache.org/jira/browse/HDFS-4953

Your patch is two weeks old, and you expect anyone to be aware of it?

> You are amazingly, astoundingly, misinformed.

No, I inherited a Hadoop project from people who did it "by the book", and then rewrote it in C with mmap to gain a 20-1000 fold improvement in speed depending on workflow. That was in 2011, but I'd be surprised if things have changed so significantly since then.

> It's no surprise that you are wrong about mmap as well. It's only a performance advantage when the PTE (page table entries) are already populated. The PTEs may not be populated, even if the file region in question is resident in memory. And even then it's only about 2x, not "1000 times faster". There was no "serialization and deserialization" involved in our benchmarks, either.

Are you using Hadoop map-reduce, or your own HDFS code? Because every single non-trivial map-reduce job that I've met serializes and deserializes objects and spends 90% of its time on I/O, serialization, network, compression and the like - and that's actually the recommended way to do it.

Which is why, incidentally, Spark manages to be so much faster than Hadoop (100 times claimed - I've heard of >70x) - it keeps stuff in memory.

It's no surprise that you are wrong about mmap as well - your description is correct if and only if you are going to read the entire file. Which is standard for Hadoop map-reduce, but actually not that common in a well-designed program, and certainly not common in the K world.

And you certainly did not understand the 2x vs 1000x comment, which apparently stems from your (faulty) assumption that everything has to be read. In an mmapped file, if you need one byte from the middle, you pay for reading one block from the middle exactly once. You do not need to pre-read everything, and you don't need to manage caching yourself. That's where the 1000x comes from.

If you do read everything, it's not 1000x. But you've possibly wasted 1000x the memory -- and 1000x the I/O -- to get to that byte in the middle that you need.


You access object #n by using an index. Something that the Parquet file format includes. If you or your team wrote jobs that always read the whole file, you were doing it wrong.

No, our HDFS mmap testing did not include MapReduce. I'm pretty sure I've been repeating that over and over as well. Why don't you write your own test program in C if you don't believe me?

How about we listen to this Linus Torvalds guy? Have you heard of him? [http://yarchive.net/comp/linux/o_direct.html]

Right now, the fastest way to copy a file is apparently by doing lots of ~8kB read/write pairs (that data may be slightly stale, but it was true at some point). Never mind the system call overhead - just having the extra buffer stay in the L1 cache and avoiding page faults from mmap is a bigger win.


> You access object #n by using an index

And in a C (or K) mmapped file, you access the n-th object by accessing a memory location. No index lookup, whether O(log n) or a hash table.

I'm indeed not familiar with Parquet - Hadoop failed me so badly I do not feel like I need to keep up to date with every new thing. I might take another look when it's been around for a while.

> Right now, the fastest way to copy a file is apparently by doing lots of ~8kB read/write pairs

That again discusses reading the whole file. When mmap delivers the huge gains is when you don't actually need the whole file - rather, when you need parts of it that you discover while reading. Which in many workloads is the case (or would be the case if you did not have to conform with some framework's constraints about the order and kind of I/O you can do).

mmap gives you hardware virtual memory and hardware-assisted caching; every other solution is software virtual memory and a software caching implementation. It has its own constraints (e.g., you have to store data in a directly usable layout rather than a "disk frozen" layout), but if you accept those, things work exceptionally well.


I don't have a dog in this fight, but note that the message you quoted - which starts with "right now" - is from over a decade ago. An awful lot has changed.


Yep. But our benchmarks are from this year, and they show the same thing.

The situation might improve when Linux gets support for hugepages on (regular) file-backed mmaps.


What is it you're working on exactly?

I'm having some trouble putting it together from context but I'm wondering if you're talking about manipulating files using mmap from... the JVM? That's kind of interesting.


Using mmap in Java is not really that interesting-- you just use FileChannel#map, which has been around for a long time. There is no need even for JNI. The only remotely "interesting" part is that Java doesn't provide munmap, so you have to work around that.

This whole thread makes me very sad. I might have to write a "Mythbusters" post about Hadoop at some point.



