
Stages of denial in encountering K - kick
http://nsl.com/papers/denial.html
======
chrispsn
From Arthur:

    
    
        design goal: speed
        write fast, run fast, modify fast.
        minimize product of vocabulary and program size.
    

[https://a.kx.com/a/k/readme.txt](https://a.kx.com/a/k/readme.txt)

    
    
        comprehensive programming environments 
        (programming, files, comms, procs, ...)
        are depressingly complex.
        commonlisp, c/lib/syscalls, java, winapi
        are all in the 1000's of entry points.
        many with multiple complex parameters.
        with k we try to do more with less
        in order to write correct programs faster.
    

[https://web.archive.org/web/20041024065350/http://www.kx.com...](https://web.archive.org/web/20041024065350/http://www.kx.com/listbox/k/msg04791.html)

    
    
        > Just curious, but does the state of California charge you a large tax
        > for each character in a variable name?
        
        yes.
        
        we also try to fit our algorithms onto licence plates.
    

[https://web.archive.org/web/20040823040634/http://www.kx.com...](https://web.archive.org/web/20040823040634/http://www.kx.com/listbox/k/msg02291.html)

For some 'licence plate' K algorithm dissections, you might like this Twitter
account:

[https://twitter.com/kcodetweets](https://twitter.com/kcodetweets)

~~~
kstenerud
The readme examples remind me of Perl code. Example:

    
    
        print<>!~($/=$")+2*map 1..$s{$_%$'}++,<>

~~~
gerikson
This one comment managed to insult both K and Perl coders...

~~~
Ma8ee
I don't think Perl coders need any help with that.

~~~
wruza
For non-Perl folks: it is a cryptic example because it uses ‘system’ variables
in a non-obvious way, not because of the syntax. I can read it syntactically
just fine, but I have to look up the names to understand what it does. A fully
ungolfed variant would require the same effort here.

~~~
ricardobeat
Golfing is not seen so negatively for Perl. "Embrace the language" they say.

------
snicker7
I used to program financial applications in K/Q for a stint. Very elegant and
orthogonal language. And very well documented. One major issue I had with it
and other APL-ish languages: lack of skimmability.

I can easily skim thousands of lines of Perl and Java in seconds and have a
rough idea of what the code does. Reading K, however, requires careful and
deliberate attention to every single character. I found it useful to maintain
documentation on how to /read/ the code (almost a literate programming
approach).

~~~
BiteCode_dev
That's bad in my book.

I write a line once for every 100 times I read it.

~~~
Roboprog
OTOH, there is something to be said for reading fewer lines in some cases.

Conifer forest

Vs

Pine Pine Cedar Spruce Pine Cedar ...

~~~
threatofrain
Isn't Lisp that Conifer forest?

------
jes5199
the thing that's odd to me is that you could write this exact same post about
other languages - a person who learns Haskell ends up writing that exact same
range/reduce/plus statement. Modern lisps like Racket or Clojure take you
there. Forth programmers and Joy programmers already noticed that their two
languages had converged: and they'd write basically exactly this, too. Really,
it's only the "mainstream" languages that are different!

~~~
TurboHaskal
A Common Lisp programmer would write it like that too. He would then
disassemble the function, run away in terror and eventually come back to
rewrite the whole thing with a more efficient loop expression.

------
saagarjha
Look, I don't hate K. It's probably a pretty good language for the task it
seems to have been designed for, which appears to be numerical computing.
Being able to fit an entire codebase on a screen is an interesting concept; while
I'm not completely convinced it should be a goal in and of itself it's
certainly something I can't rule out as a possible productivity booster.

However, I still find it problematic, and it's not really about the language
itself: it's more about how it's promoted in the article, by the people in
this thread, and in general. Specifically, the claims that it's perfectly
readable, and that anyone who says otherwise is either stupid, lying, or too
poor to understand it, are not useful. (Yes, you know who you are for that last one.
You're not helping your case.) The fact of the matter is that most programmers
are used to ALGOL-esque syntax, and there is significant value in being able
to lift concepts and interoperate between those languages at a superficial
level. Every "inspired" language hits these issues, depending on how
"strange" it is: Objective-C is ugly, Lisp is alien, and on the far end APL
derivatives are simply labelled as unreadable. That doesn't mean they're bad
languages, but to those promoting it: drop the attitude, and understand why we
have reservations. Your language isn't the "true" way to program, that only
the chosen elect can understand and lecture us on. Don't ignore or gloss over
languages that have similar features to yours. We want to hear about the
interesting things you can do in the language, and we want actual guidance for
how we can adapt to it (not: "one-letter identifiers are just what we do; deal
with it"). Thanks.

~~~
yiyus
I love small languages (lisps, forths, apls) and think that there is much to
learn from them, but you need an open mind. You need to start from zero and
totally forget your ALGOL-mindset while you are learning the new paradigm.

It's like learning Chinese. Everyone learns it in China, and everyone there
will tell you that it's perfectly readable and you don't need to be
extraordinarily intelligent to speak it. But if you expect to figure it out
based on your knowledge of English, it will be terribly frustrating. This does
not mean that German or Dutch (which share roots with English) are better
languages, or even easier to learn. Do not expect your Chinese teacher to give
you a different answer from "these weird symbols are just what we do; deal
with it".

This article is quite good. Clearly the author understands your reservations,
and he is saying: give it a try and it will click. The learning material could
be better, but it's there (start with Iverson and work from that). It's much
easier than learning Chinese!

~~~
ivanbakel
That's a very convenient analogy for your argument, but it doesn't make sense
for programming languages in general.

It's not really meaningful to compare the readability of sentences in English
versus Chinese. On the other hand, it's both meaningful and important to
compare readability of program snippets - and readability is more than just
the first-time learning burden.

Overly-concise code being difficult to maintain is a well-accepted fact. The
response shouldn't be to demand fluency - that's just sticking your head in
the sand.

~~~
yiyus
Analogies are just analogies. The point of my comment was that you need to
approach this with an open mind, forgetting what you already know from other
languages.

> Overly-concise code being difficult to maintain is a well-accepted fact.

Your well-accepted facts do not apply to this totally different paradigm. You
seem to think those well-accepted facts are universal. They are not.

I have spent the last two years learning about array languages and think it is
a fascinating topic, so I tried to give some advice (based on my own
experience) to those who want to make the same journey. My point is not that
you need fluency (you need some fluency to understand anything complicated),
it is that the fluency you have in other languages may be counterproductive.
Your comment is an excellent example of this.

Try to read a tutorial. Rewrite it using "meaningful names". We have all done
that at some point, and we all started finding the short names more comfortable
to work with after a while. Don't take my word for it; give it a try.

~~~
wpietri
> The point of my comment was that you need to approach this with an open
> mind, forgetting what you already know from other languages.

Why do I need to do that?

I'm interested in solving problems for people, especially using computers. To
do that, I need to build teams and build software. I need to think about costs
and benefits, about longevity and sustainability. A great deal of that
thinking is driven by the world as it is, by the historical accidents that
have gotten us to where we are.

If this is a recreational activity, not meant to be practical, then great. I
know people who teach themselves Esperanto and Klingon, and they seem to have
a good time. Extreme Ironing looks fun, too. But if it's going to have some
bearing on my profession, then I'm unlikely to forget what I already know,
because in practice the people I will work with already know quite a bit, and
brains being what they are, they won't be forgetting it.

~~~
geocar
> Why do I need to [approach this with an open mind]?

Because those are the standard terms of intellectual discussion. People going
into such a conversation without an open mind are called idiots, and they're
not worth taking too seriously.

> I need to think about costs and benefits, about longevity and sustainability

Isn't it weird that people don't value that?

I mean, look at how many Python programmers are out there! I have programs
just five years old that don't work in a new Python interpreter, and that's
terrible for "longevity and sustainability". Good thing there are so many
cheap Python programmers so we can keep the costs down: we're going to be
rewriting this thing until we're old and gray!

I personally find this line of thinking quite unproductive though.

> I know people who teach themselves Esperanto and Klingon, and they seem to
> have a good time.

Hey that's great. I know a few Go programmers too. Loads of fun those chaps.

If only Esperanto or Klingon or Go were useful in some way....

~~~
wpietri
You omitted a key piece of what I said. If you'd like to answer the question I
actually asked, feel free. Bonus points if you demonstrate the values you
claim to champion here, like open-mindedness and eagerness for sincere
discussion.

~~~
kick
How did he not answer the question?

You asked why you should keep an open mind when encountering and talking about
a strange language. He pointed out that it's necessary for a good discussion
on it. He's not wrong.

You've dismissed every point he made while not knowing a single thing about
the topic at hand. That's not grounds for a proper discussion.

~~~
wpietri
That is not what I asked.

What I'm responding to in the thread is a question of pragmatism. In
particular, I'm taking issue with the notion that one can only approach this
"forgetting what you already know from other languages".

That's very much distinct from the notion of an open mind. And it's especially
distinct when we're talking about tools that must fit into a professional
context that involves decades of history, millions of people, and a
complicated set of economic factors.

For example, imagine that you were working in a machine shop and I came to you
excited about a new screw-head. If you dismissed it out of hand, I might ask
you to keep an open mind while I explained why I thought it was better. But if
my explanation ignored that the Phillips head was the dominant choice, with
billions of drivers and hundreds of billions or perhaps trillions of screws in
existence, that would be another thing entirely.

The same logic applies to conlangs like Esperanto and Klingon. Is it possible
it would be better if we switched the whole world to Klingon? Sure. Might I
learn interesting concepts if I studied Klingon? It's possible. Should Klingon
be considered for a moment as an important thing for most people to learn? No,
not at all. It's utterly impractical.

So yes, keeping an open mind is valuable. But that's not something I've ever
heard seriously disputed, so editing me down to saying that is constructing a
straw man. Which also isn't grounds for a proper discussion.

------
kick
This was written by 'RodgerTheGreat, whose comments and submissions on this
site are a treasure trove.

~~~
smnscu
Thanks for the tip. Looks like they also have a webcomic:
[http://beyondloom.com/](http://beyondloom.com/)

------
caymanjim
Not mentioned in the article is that the K programming language is used for
kdb+, which is a fast in-memory time-series database that's heavily used in
the finance industry for securities trading systems. I've worked with it a
tiny bit, but the pros are all Dark Wizards.

------
carapace
This article should end with something like, "...and then you find out about
arcfide's Co-dfns compiler..."
[https://github.com/Co-dfns/Co-dfns](https://github.com/Co-dfns/Co-dfns)

You don't have to like it, but the existence of K et al. shows that _most_ of
us are wasting a huge amount of time and energy using bad tools. These things
are so powerful and not _that_ hard to learn.

~~~
Nursie
It shows no such thing. It shows that you have found something you think you
can be productive in and that you like, not that the rest of us are on the
wrong track.

~~~
carapace
I can't agree. (BTW, I don't like K, and I don't use it, but if I did I'd like
to think I'd be productive in it.)

This little system runs circles around entire sub-industries of other
software. The fact that it exists and uses one or two orders of magnitude less
time and code than other systems is significant.

It's like axes vs. chainsaws. If the job is to log a forest, the latter will be
better than the former.

~~~
Nursie
Less code is entirely irrelevant unless for some reason you're concerned about
a few k of storage for your source; less time appears to be speculation.

If people can make working, maintainable software other ways, particularly if
those other ways have large, established ecosystems of dependencies, then they
are likely doing it right regardless of how much you like apl.

~~~
carapace
Again, I don't _like_ APL.

Less code -> fewer bugs, all other things equal, eh?

> large, established ecosystems of dependencies

...are actually a symptom of failure to refactor. Ideally, over time, your code
base approaches the Kolmogorov complexity of the problems it solves.

> If people can make working, maintainable software other ways ...

...but it takes 10x-100x more code, RAM, CPU power, developer time, etc. than
if you hadda used K, then you're leaving money on the table.

------
sriram_malhar
I love the brevity of regular expressions and use them on a daily basis. It is
the same argument that keeps me returning to K: the syntax is terse and
compact, the semantics are simple and composable, and your eyes get used to
it.

Beyond a point, however, I cannot read my own regexes after a month's absence.
Which is why I use perl's /x modifier extensively to split up regex components
onto multiple lines and to document them thoroughly, even if they are for
throwaway scripts, because I don't always throw them away!

For example:

    
    
        $_ =~ m/^                         # anchor at beginning of line
                The\ quick\ (\w+)\ fox    # fox adjective
                \ (\w+)\ over             # fox action verb
        \ the\ (\w+)\ dog         # dog adjective
        (?:                       # whitespace-trimmed comment:
                  \s* \# \s*              #   whitespace and comment token
                  (.*?)                   #   captured comment text; non-greedy!
                  \s*                     #   any trailing whitespace
                )?                        # this is all optional
                $                         # end of line anchor
               /x;                        # allow whitespace
    

(source:
[https://www.perl.com/pub/2004/01/16/regexps.html/](https://www.perl.com/pub/2004/01/16/regexps.html/))

This is where K fails me. It may not be a fault of the language, but everyone
in the community has bought into this strange idiomatic style. I can't imagine
debugging it, or checking it for correctness, or foisting it on a less
experienced developer. Here's a canonical example, an XML parser, from their
website.

[https://a.kx.com/a/k/examples/xml.k](https://a.kx.com/a/k/examples/xml.k)

Where's the pedagogy? Where are the comments? Why is this line noise
considered acceptable?

~~~
geocar
> [https://a.kx.com/a/k/examples/xml.k](https://a.kx.com/a/k/examples/xml.k)

> Where's the pedagogy? Where are the comments?

Most of that document _is_ comments. There's a comment on almost every line,
very similar to your perl example. Comments begin with a "/" character that
has nothing functional to its left (e.g. only whitespace).

First we have some constants (L,W,B,S,R) which refer to the left bracket,
whitespace (which includes the blank), the blank space, and the slash and
right bracket. We've also got some utility functions (cut;join). These are
simple enough that they don't require any special explanation to the K
programmer who reads this.

Then we have a function that produces an xml-entity from a character. The
author assumes octal is required, so (needlessly) converts to that. The octal
string (with a leading zero) is concatenated onto ";&#" then rotated so the
";" appears at the end (1! is cute). I would probably write this differently,
because: 1!";&#",$_ic is shorter.

We then have a function that does the reverse, cutting off the first three
characters after rotating (which is the ";&#" string again) and converts the
octal digits back into decimal. This is probably wrong because real XML
documents will probably prefer decimal entities, but perhaps the author wasn't
dealing with these. I would certainly write this differently if I changed oc
(as above).

Now we have the helper functions xc and cx (whose names suggest they are
converting from character-to-xml and xml-to-character respectively). This is a
stylistic observation; we can also see this from the comment, or by reading
the code (if we know what XML is). These implementations are pretty basic,
just using ssr to do repeated search/replace on the entities (note that to ssr
the ? character means any).

You get used to it.

> Why is this line noise considered acceptable?

One major challenge in reading inscrutable perl scripts is knowing where the
execution begins. Perl just has so many rules for parsing that you really need
either wizardry or patience to know how to pull it apart, but K is _extremely
regular_: there's only one way to parse it, and shortly after learning it you
also learn (quickly) that you can insert trace statements that don't change the
meaning of the rest of the statement to learn a new operator (or a new use of
an operator you didn't know). I note this especially as it is extremely hard
to do in perl (and even in other Iverson languages, including APL and J).

Line noise is a subjective quality that goes away (at least in this case) when
you become more fluent in K. I don't believe this is necessarily true of all
compact languages though.

~~~
skybrian
If the comments were good (more like what you wrote), then the ratio of
comments to text would be even higher. And as with writing assembly, the risk
of comments getting out of sync with the code is higher, too.

~~~
geocar
Thank you. The narration is what happens in my brain when I read it. I don't
need it in the source files. Keeping the file short is the best way to keep it
consistent (avoiding what you refer to as "getting out of sync").

~~~
skybrian
This seems similar to how Unix commands have both short and long names for
flags. Single-character flags are easier to type, but also easier to mistype
or misread since there is less redundancy.

It seems like K would be a particularly suitable language for having more than
one syntax. The short syntax, once you get used to it, would be better for
keyboard input and expert whiteboard discussions, but it might also be nice if
there were a standard syntax that was longer and closer to what most people
expect. An editor could automatically translate between short and long
syntax, and this would be helpful for making sure you typed what you think you
did.

~~~
geocar
I think that's part of the theory behind q, which trades the monadic (unary)
definitions of operators for names, so +: becomes flip, =: becomes group, ?:
becomes distinct, and so on. I'm not convinced though, because Python+numpy
has most of these operations (and those it doesn't have aren't particularly
difficult to implement), so it seems reasonable you could implement an
environment almost as good as q[1].
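As a rough sketch of that correspondence (my example, not geocar's; note that q's =: preserves first-appearance order while np.unique sorts, so the match is only approximate):

```python
import numpy as np
from collections import defaultdict

m = np.array([[1, 2], [3, 4]])
flipped = m.T                            # q's +: (flip) is a transpose
distinct = np.unique([3, 1, 3, 2, 1])    # q's ?: (distinct), except np.unique also sorts

# q's =: (group) maps each distinct value to the indices where it occurs;
# numpy has no single call for this, but a dict of lists gets close
groups = defaultdict(list)
for i, v in enumerate([3, 1, 3, 2, 1]):
    groups[v].append(i)
# groups == {3: [0, 2], 1: [1, 4], 2: [3]}
```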

But whilst the k/q _operators_ are certainly useful, the key thing is the
_notation_. The notation is _really_ valuable, and it seems hard to get it
until you understand the notation well enough that it starts changing how you
think about programming: numpy.matmul(x,y) might do the same thing that x+.y
does, but the latter suggests more. I recommend reading Iverson's paper[2] on
the subject, although you might find reading §5.2 before the beginning to be
helpful in putting into context what exactly is meant by notation here.

[1]: There's a lot missing still. Good tables, efficient on-disk
representation of data, IPC, views, and others-- all of which will be hard to
do in Python without limiting yourself to a subset of Python that might not
feel like Python anymore anyway.

[2]:
[http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...](http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pdf)

------
dang
A million-line program isn't readable by anybody, no matter how readable the
language is. If the equivalent program can be written in, say, a thousand
lines in some more concise language, that's more than worth the learning
curve, even if the language is strange and off-putting.

~~~
goto11
I doubt that K can reduce line count by a factor of 1000, though. The examples
in the article are mostly about shorter identifiers, like "!" instead of
"range", and a compact notation. That is perhaps a factor of 5, not a factor
of 1000.

The example with a for-loop for summing a range is a blatant strawman. In
which modern language would that be idiomatic?
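For reference, the contrast in question looks something like this in Python (my sketch, not from the article); the idiomatic one-liner already exists in most modern languages:

```python
# the explicit loop the article uses as a foil
total = 0
for i in range(1, 11):
    total += i

# the idiomatic equivalent most modern languages already offer
assert total == sum(range(1, 11)) == 55
```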

Furthermore, the examples only show a particular use case: processing lists of
numbers. How do the benefits measure up for all the other stuff a million-
line program does?

That said, if you really have a million-line program doing mostly numerical
processing, then I'm sure it would be a massive benefit to switch to a
programming language optimized for this task.

~~~
geocar
> I doubt that K can reduce line count by a factor 1000 though.

It adds up! Consider something like this:

    
    
        (defun count (list)
          (let ((hash (make-hash-table)))
            (dolist (el list)
              (incf (gethash (cadr el) hash 0) (car el)))
            (let (result)
              (maphash (lambda (key val)
                         (push (list val key) result))
                 hash)
              result)))
    

That's just #:'=:

~~~
Veedrac
Python 3.7+: Counter(elems).values()

But we can play this game both ways. What's the K for (Counter(a) &
Counter(b)).most_common(3)?
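For readers who don't know the Python side of that challenge: Counter's & operator takes the element-wise minimum of counts (multiset intersection), and most_common(3) returns the three largest. A small illustration (my example, not from the thread):

```python
from collections import Counter

a = Counter("aabbbcc")        # {'a': 2, 'b': 3, 'c': 2}
b = Counter("abbc")           # {'a': 1, 'b': 2, 'c': 1}

common = a & b                # element-wise minimum: {'a': 1, 'b': 2, 'c': 1}
top3 = common.most_common(3)  # [('b', 2), ('a', 1), ('c', 1)]
```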

~~~
geocar
It's a little funny comparing a language with another language plus its entire
ecosystem, but k fares remarkably well, I think.

Counter could be #:'=: (count each group)

    
    
        c:#:'=:
    

Values would just be dot.

Counter[x]&Counter[y] is a bit tricky to write, because while the
documentation says set intersection, what they really mean is the set
intersection of the keys with the lower of the values.

This is entirely clear in k. First, understand that I have a common "inter"
function that I use frequently (it's actually called "inter" in q):

    
    
        n:{x@&x in y}
    

and from there, I can implement this weird function with this:

    
    
        i:{(?,/n[!x;!y])#x&y}
    

I've never needed this particular function, but I can read out the characters:
distinct raze intersection of keys-of-x and keys-of-y, taking [from] x and y.
It sounds very similar to the definition of the function I gave above (if you
realise the relationship between "and" and "lower of the values"), and this
was painless to write.

most_common could be: {k!x k:y#>x} but if I'm going to want the sort (in q) I
might write y#desc x, which is shorter. In this case I can save a character by
reversing the arguments:

    
    
        m:{k!y k:x#>y}
    

so our finished program (once we've defined all our "helper functions" in a
module) is:

    
    
        m[3] i . c'(x;y)
    

So we're looking at 13 lexemes, versus 19 - a minor victory unless you count
the cost of:

    
    
        from collections import Counter
    

and start counting by bytes! But there's more value here:

All of these functions operate on data, so whether the data lives on the disk
or not makes no difference to k.

That's really powerful.

In Python I need more code to get it off the disk if that's where it is.

I also may have to deal with the possibility the data might "live" in another
object (like numpy) so either I trust in the (in)efficiency of the numpy-to-
list conversion, or I might have to do something else (in which case really
understanding how Counter[a]&Counter[b] works will be important!).

I might also struggle to make this work on a big data set in python, but k
will do just fine even if you give it millions of values to eat.

These things are valuable. Now if you're interested in playing games, let's
try something hard:

    
    
        "j"$`:x?`f
    

It creates (if necessary) a new enumeration in a file called "x" which
persists the unique index of f, so:

    
    
        q)"j"$`:x?`f
        0
        q)"j"$`:x?`g
        1
        q)"j"$`:x?`h
        2
        q)"j"$`:x?`f
        0
    

How would you do this in Python?

~~~
Veedrac
> It's a little funny comparing a language with another language plus its
> entire ecosystem

This is pretty much my point: any ‘reimplement this weird built-in thing in
$OTHER_LANG’ is going to involve overheads unless $OTHER_LANG also has that
thing. I don't think keeping most types out of the prelude should be a
downside; you can always ‘from myprelude import *’ if you disagree.

> How would you do this in Python?

I don't really know what "persists the unique index of f" means, but this
seems similar to shelve. I can _misuse_ that to give the same effect as what
you showed.

    
    
        with shelve.open('x') as db: db.setdefault("f", len(db))
        >>> 0
        with shelve.open('x') as db: db.setdefault("g", len(db))
        >>> 1
        with shelve.open('x') as db: db.setdefault("h", len(db))
        >>> 2
        with shelve.open('x') as db: db.setdefault("f", len(db))
        >>> 0

~~~
geocar
> I don't think keeping most types out of prelude should be a downside

I don't understand what that means.

> I don't really know what "persists the unique index of f" means, but this
> seems similar to shelve. I can misuse that to give the same effect as what
> you showed.

I think you got it, but it seems like a lot of typing!

How many characters do you have to change if you want it purely in memory? In
k I just write:

    
    
        `x?`f

~~~
Veedrac
‘Prelude’ is the set of standard functions and types that you don't have to
import to use. Python comes with a rich standard library, but most require
imports to use.

> How many characters do you have to change if you want it purely in memory?

When the question is whether ‘K can reduce line count by a factor of 1000’,
shorter keywords hardly cut it. ‘setdefault’ is wordier than ‘?’, but for
something you're only going to use on occasion, so what? And
d.setdefault(_, len(d)) is something you'll use maybe once a year.

shelve acts like a dict, so you can do the setdefault dance the same way on
in-memory dicts.
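That in-memory ‘setdefault dance’ is short enough to spell out (my sketch, not from the thread):

```python
enum = {}

def index_of(key):
    """Return a stable unique index for key, assigning the next free one on first sight."""
    return enum.setdefault(key, len(enum))

index_of("f")  # 0
index_of("g")  # 1
index_of("h")  # 2
index_of("f")  # 0 again: same key, same index
```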

~~~
geocar
> ‘Prelude’ is the set of standard functions and types that you don't have to
> import to use. Python comes with a rich standard library, but most require
> imports to use.

I might not understand this about python.

    
    
        $ python3
        Python 3.7.6 (default, Dec 30 2019, 19:38:26) 
        [Clang 11.0.0 (clang-1100.0.33.16)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> Counter
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        NameError: name 'Counter' is not defined
    

Should that have worked? Is there something wrong with my Python installation?

> but as something you're only going to use on occasion, so what? And
> d.setdefault(_, len(d)) is something you'll use maybe once a year.

If it takes that many keystrokes, you're definitely only going to use it once
per year! This is the normal way you make enumerations in q/k.

The low number of source-code bytes _is_ the major feature in k. I've written
some on why this is valuable, but at the end of the day, I don't really have
anything more compelling than _try it, you might like it_, and _it works for
me_. This is something I'm working on though.

- [https://news.ycombinator.com/item?id=8476113](https://news.ycombinator.com/item?id=8476113)

- [https://news.ycombinator.com/item?id=10876975](https://news.ycombinator.com/item?id=10876975)

~~~
Veedrac
Counter is not in Python's prelude. Therefore you have to import it in order
to use it.

My comment about prelude was in response to you saying I was using Python's
“language+plus-its-entire-ecosystem”. This is not true; Counter is part of the
standard library that comes with the language. It is not from a third-party
package.

> If it takes that many keystrokes, you're definitely only going to use it
> once per year! This is the normal way you make enumerations in q/k.

You don't use this frequently in other languages because you almost never care
about the index of a value in a hash table (because it's ridiculous), and you
almost never want to brute-force unique a list (because it's a footgun).

~~~
geocar
> you saying I was using Python's “language+plus-its-entire-ecosystem”. This
> is not true; Counter is part of the standard library that comes with the
> language. It is not from a third-party package.

I'm not sure I agree "third-party package" is a good/useful place to draw the
line, but I don't think it's terribly important. Sorry.

> you almost never care about the index of a value in a hash table (because
> it's ridiculous)

So I want to query all of the referrer urls I've seen recently. I only see a
few of them, so I want to have a table of all the URLs, and another table with
one row per event (landing on my page). In SQL, you might have a foreign-key
operation, but in q, I can also make the enumeration directly. I don't really
care about the value of the index, just that I have one.

~~~
Veedrac
I wouldn't want to use an index as a long-lived key like that, since it makes
ops like deletion a footgun.

For temporary calculations I'd just `{u: Url(u) for u in url_strings}` for the
table of URLs, and stick the `Url` objects in the table of events.

~~~
geocar
I don’t understand any of this. I don’t know what a footgun is. I have no idea
what a “URL” object is or how you stick it in a table of events (on disk? in
memory? in a remote process called a “database”?). I don’t know what you mean
by temporary calculations.

I have fifty billion of these events to handle every day, and I do this on one
server (that is actually doing other things as well). It is not an option to
“just” do anything at this scale if I want to keep costs under control.

~~~
Veedrac
Sorry, it's a bit weird not sharing a lexicon. I get the impression Q is a
whole different genealogy of programmers.

In Python, although you _can_ just use shelve to store data on disk, in
practice this is considered a bad idea beyond very simple cases. Valuable data
wants the guarantees that real databases provide, like ACID. shelve doesn't
provide this, and IIUC nor does kdb+.

So if you're handling 50 billion events a day, live, and you need these to
persist, you'd use SQL or something similar. That would then ultimately
determine how you add and manipulate records.

If you don't care that much if you lose the data on a crash, that's when we're
talking about temporary calculations. In Python, rather than having two tables
like (eg.)

URLs:

    
    
        url_id  url          count
           1    google.com    300
           2    python.org    400
           3    example.net   200
    

Requests:

    
    
        request_id   data   url_id
            1       'spam'     2
            2       'eggs'     2
           ...
           900       'ham'     1
    

you would make a custom type, e.g. ‘Url’, containing ‘url’ and ‘count’ as
object fields, and then store your requests as a list containing references to
those ‘Url’s.

    
    
        # (Url and Request defined here so the sketch actually runs)
        from dataclasses import dataclass

        @dataclass
        class Url:
            url: str
            count: int

        @dataclass
        class Request:
            data: str
            url: Url

        urls = {
            "google.com":  Url("google.com",  300),
            "python.org":  Url("python.org",  400),
            "example.net": Url("example.net", 200),
        }
    
        requests = [
            Request("spam", urls["python.org"]),
            Request("eggs", urls["python.org"]),
            ...,
            Request("ham",  urls["google.com"]),
        ]

~~~
geocar
> ... like ACID. shelve doesn't provide this, and IIUC nor does kdb+. So if
> you're handling 50 billion events a day, live, and you need these to
> persist, you'd use SQL or something similar. That would then ultimately
> determine how you add and manipulate records. …

ACID is overrated. You can get atomicity, consistency, isolation and
durability easily with kdb as I'll illustrate. I appreciate you won't
understand everything I am saying though, so I hope you'll be able to ask a
few questions and get the gist.

First, I write my program in g.q and start a logged process:

    
    
        q g -L
    

This process receives every event in a function like this:

    
    
        upd:{r[`u?y`url;y`metric]+:1}
    

There's my enumeration, saved in the variable "u". "r" is a keyed table where
the keys are that enumeration, and the metric is whatever metric I'm tracking.
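For readers who don't know q, here is a rough Python sketch of what that `upd` is doing (the dict-based enumeration is an approximation; q's `u?y`url` here behaves like a find-or-append into the enumeration `u`):

```python
from collections import defaultdict

u = {}                # the enumeration: url -> small integer index
r = defaultdict(int)  # the keyed table: (url index, metric) -> count

def upd(y):
    # setdefault is a find-or-append: existing urls reuse their index,
    # new urls get the next one (roughly what `u?y`url does)
    i = u.setdefault(y["url"], len(u))
    r[i, y["metric"]] += 1

upd({"url": "google.com", "metric": "clk"})
upd({"url": "python.org", "metric": "req"})
upd({"url": "google.com", "metric": "clk"})
```

The index values themselves never matter, only that each url consistently maps to one.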

I checkpoint daily:

    
    
        eod:{.Q.dd[p:.Q.dd[`:.;.z.d];`r] set r;.Q.dd[p;`u] set u;r::0#r;system"l"}
    

This creates a directory structure where I have one directory per date, e.g.
2020.03.11, which contains the snapshot files (u and r) I took. I
truncate my keyed table (since it's a new day), and then I tell q the logfile
can be truncated and processing continues! To look at an (emptyish) tree right
after a forced checkpoint:

    
    
        total 24
        drwxr-xr-x  4 geocar  staff  128 11 Mar 16:59 2020.03.11
        -rw-r--r--  1 geocar  staff    8 11 Mar 16:59 g.log
        -rw-r--r--  1 geocar  staff  206 11 Mar 16:55 g.q
        -rw-r--r--  1 geocar  staff  130 11 Mar 16:59 g.qdb
    
        geocar@gcmba a % ls -l 2020.03.11 
        total 16
        -rw-r--r--  1 geocar  staff  120 11 Mar 16:59 r
        -rw-r--r--  1 geocar  staff   31 11 Mar 16:59 u
    

The g.q file is the source code we've been exploring, but the rest are binary
files in q's "native format" (it's basically the same as in memory; that's why
q can get this data with mmap).

If I've made a mistake and something crashes, I can edit g.q and restart it,
the log replays, no data is lost. If I want to do some testing, I can copy
g.log off the server, and load it into my local process running on my laptop.
This can be really helpful!

I can kill the process, turn off the server, add more disks in it, turn it
back on, and resume the process from the last checkpoint.

You can see some of these qualities are things only databases seem to have,
and it's for that reason that kdb is marketed as a database. But that's just
because management has a hard time thinking of SQL as a programming language
(even though it's there in the name! I can't fault them, it is a pretty
horrible language), and while nobody wants to write stored procedures in SQL,
_that's one way to think about how your entire application is built in q._

That's basically it. There's a little bit more code to load state from the
checkpoint and set up the initial days' schema for r:

    
    
        u:@[get;.Q.dd[.Q.dd[`:.;last key `:.];`u];{0#`}];
        r:([url:`u$()]; req:0#0; imp:0#0; clk:0#0; vt:0#0);
    

but there's no material difference between "temporary calculations" or ones
that will later become permanent: All of my input was in the event log, I just
have to decide how to process it.

~~~
Veedrac
I mean, sure, if your problem is such that a strategy like that works for you,
I'm not going to tell you otherwise. You can log incoming messages and dump
data out to files easily with Python too. I wouldn't want to call that a
‘database’, though, since it's no more than a daily archive.

~~~
geocar
Yes! "databases" are all overrated too. Slow expensive pieces of shit. No way
you could do 50 billion inserts from Python to SQL server on a single core in
a day!

I'm so glad k isn't a "database" like that.

~~~
Veedrac
It's a bit unfair to compare the speed of wildly inequivalent things. RocksDB
would be more comparable, but even there it is offering _much_ stronger
resilience guarantees, multicore support, and gives you access to all your
data at once.

Calling them expensive is ironic AF. Most of them are free and open source.

~~~
geocar
> It's a bit unfair to compare the speed of wildly inequivalent things.

Yes, but I understand you keep doing it because you don't understand this
stuff very well yet.

> RocksDB would be more comparable

How do you figure that? RocksDB is not a programming language.

If you _combine_ it with a programming language like C++, it can, with only 4x
the hardware, just about keep up with 1/6th of my macbook air[1].

RocksDB might be more comparable to gdbm, but it's not even remotely like q or
k.

[1]: And that's taking facebook's benchmarks at their word here, ignoring how
utterly synthetic these benchmarks read:
[https://github.com/facebook/rocksdb/wiki/Performance-Benchma...](https://github.com/facebook/rocksdb/wiki/Performance-Benchmarks)

> much stronger resilience guarantees,

You're mistaken. There is no resilience guarantee offered by rocksdb. In q I
can backup the checkpoints and the logs independently. It is trivial to get
whatever level of resilience I want out of q just by copying regular files
around. RocksDB requires more programming.

> gives you access to all your data at once

You're mistaken. This is no problem in q. All of the data is mmap'd as soon as
I access it (if it isn't mmap'd already).

> Calling them expensive is ironic AF. Most of them are free and open source.

If they require 4x the servers, they're _at least_ 4x as expensive. If it
takes 20 days to implement instead of 5 minutes, then it's over 5000x as
expensive.

No, calling that "free" is what's ironic, and believing it is moronic.

~~~
Veedrac
> How do you figure that? RocksDB is not a programming language.

I'm comparing to the code you showed. You're using the file system to dump
static rows of data. All your data munging is on memory-sized blocks at
program-level. Key-value stores are the comparable database for that.

> You're mistaken. This is no problem in q. All of the data is mmap'd as soon
> as I access it (if it isn't mmap'd already).

Yes, because you're working on the tail end of small, immutable data tables,
rather than an actually sizable database with elements of heterogeneous sizes.

> In q I can backup the checkpoints and the logs independently. It is trivial
> to get whatever level of resilience I want out of q just by copying regular
> files around.

Yes, because you don't want much resilience.

\---

What you're doing here is _incredibly_ simplistic. It's not proper resiliency,
it's not scalable to more complex problems, and it's not scalable to larger
workloads. An mmap'ed table and an actual database are different things.

It works fine for you, but for many other people it wouldn't.

~~~
geocar
> You're using the file system to dump static rows of data

That's what MySQL, PostgreSQL, SQL Server, and Oracle all do. They write to a
logfile (called the "write ahead log") then periodically (and concurrently)
process it into working sets that are checkpointed (checked) in much the same
way. It's a lot slower because they don't know what is actually important in
the statement except what they can deduce from analysis. Whilst that analysis
_is_ slow, they do this so that structural concerns can be handed off to a
data expert (often called a DBA), since most programmers have no fucking clue
how to work with data.

That can work for small data, but it doesn't scale past around the 5bn
inserts/day mark currently, without some very special processing strategies,
and even then, you don't get close to 50bn.

> All your data munging is on memory-sized blocks at program-level.

That is literally all a computer can do. If you think otherwise, I think you
need a more remedial education than the one I've been providing.

> What you're doing here is incredibly simplistic. It's not proper resiliency,
> it's not scalable to more complex problems, and it's not scalable to larger
> workloads. An mmap'ed table and an actual database are different things.

Yes, except for everything you said, nothing you said is true in the way that
you meant it.

Google.com does not query a "database" that looks _any_ different from the one
I'm describing; Bigtable was _based_ on Arthur's work. So was Apache's Kafka
and Amazon's Kinesis. Stream processing is undoubtedly the future, but it
started here:

[https://pdfs.semanticscholar.org/15ec/7dd999f291a38b3e7f455f...](https://pdfs.semanticscholar.org/15ec/7dd999f291a38b3e7f455f39f242820c7939.pdf)

Not only does this strategy get used for some of the hardest problems and
biggest workloads, it's quite possibly the only strategy that _can_ be used
for some of these problems.

Resiliency? Simplistic? I'm not even sure you know what those words mean.
Putting "proper" in front of it is just weasel words...

~~~
goto11
> I'm not even sure you know what those words mean.

I don't think you are doing your case any favors with such rhetoric.

~~~
geocar
It wasn't rhetorical.

------
heinrichhartman
Looks like the author figured it all out at the very beginning:

> If each punctuation character in the K is a separate part of speech, and you
> reverse their order
    
    
         +      /     !    100
         plus reduce range 100
    
    

This seems to be how this works. Compare that to math equations like these
ones:

[https://en.wikipedia.org/wiki/Maxwell%27s_equations#Formulat...](https://en.wikipedia.org/wiki/Maxwell%27s_equations#Formulation_in_SI_units_convention)

You would not expect to read them like a paragraph. You are expected to read
them symbol by symbol and unpack from the inside out.

K seems to be a dense language. Expect to read it symbol by symbol, not word
by word or line by line.
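Concretely, `+/!100` read right-to-left corresponds to an ordinary reduce over a range; in Python terms (purely illustrative):

```python
import functools
import operator

nums = range(100)                             # !100 -> 0 1 2 ... 99
total = functools.reduce(operator.add, nums)  # +/   -> "plus reduce"
assert total == sum(range(100)) == 4950
```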

It will become more readable if you (1) slow down and (2) get used to the
programming patterns.

Not saying K is a good language or anything. But the frustration of the author
seems to have more to do with impatience and mismatched expectations than with
flaws in the language itself.

~~~
yiyus
The author is a professional K developer who works with K on a daily basis and
has implemented a K interpreter in JS (oK). He is not frustrated, he is making
a point. It looks like you perfectly got the point.

~~~
heinrichhartman
Thanks for the clarification.

------
aw1621107
I've seen a few posts here and there over the years about array languages, but
I can't recall any (or any discussion in their comments) that have mentioned
any disadvantages beyond more human factors.

So for the k/q/APL/array language gurus out there: in what situations would I
_not_ want to use k or another array language, if the relative obscurity of
the syntax and mental model were not a factor (e.g., assuming you, your team
(if any), and anyone who would ever look at your code would be very familiar
with the language and idioms)?

I may not want to choose Go or Java because of latency concerns, or Rust
because of compilation speed/iteration time concerns, or C/C++ because of
safety issues, or Python/Ruby/Javascript because of performance issues. Why
would I not want to pick k?

~~~
7thaccount
Performance, depending on your use case. APL/J/K can be very fast for an
interpreted language, but they won't beat C, C++, Fortran, and Java in most
cases.

Also, think about distribution. Most of the array languages are commercial, so
I'd be cautious about building a business around one. Other languages are free
to use and deploy as you see fit, and some produce binary executables that
don't need a runtime. K is very expensive. Dyalog APL is pretty reasonable,
but still costs money to use, plus royalties in certain situations.

I think any decent developer can pick up an array language, but many won't
want to, as they'd rather have something more mainstream (Java, JS, C++,
Python, etc.) on their resume. So it might be difficult to hire.

~~~
aw1621107
> APL/J/K can be very fast for an interpreted language, but they won't beat C,
> C++, Fortran, and Java in most cases.

Do APL/j/k/q work well for some performance-sensitive situations but not
others? Some of the comments here by people who appear to be very familiar
with the language make it seem like it performs quite well compared to the
traditional high-performance languages.

Distribution is definitely something I hadn't considered; I guess I've been
spoiled by the variety of free/open languages available these days.

I've always wanted to try to pick up an array language to expand my horizons a
bit (e.g., that one bit of advice that programmers should learn one of the
four main types of languages --- ALGOL-based, Lisp, functional, array-based),
but I just haven't had the time.

~~~
kick
Worth noting that most APL interpreters and the J interpreter are
significantly slower than (most) k/q interpreters. I'm not sure if the
performance argument is relevant for all of them.

Arthur Whitney's a biased source, of course, but the kparc site has a bunch of
little programs in which he beats the equivalents from rather influential
people (like the UNIX/Plan 9/Inferno authors
[http://www.kparc.com/z/bell.k](http://www.kparc.com/z/bell.k) ) for bragging
rights.

k losing to some really nicely-written C and most Fortran seems plausible,
Java and C++ not so much.

~~~
aw1621107
Is there publicly available information on what makes k/q interpreters so much
faster? Or is that part of the secret sauce that the companies maintaining
those interpreters sell? I assume it's the latter, but hope for the former,
since good descriptions of the technical work that goes into high-performance
systems usually makes for fascinating reading.

------
i_don_t_know
How does error handling work in K? For example, what happens when you try to
read a file that doesn’t exist? How does the read function indicate failure to
the caller? How does the caller check and recover (show an error to the user
and continue with some default values)?

~~~
RodgerTheGreat
If primitives fail, they raise a signal. There is also a primitive for
unconditionally raising a signal. Most flavors of K have a primitive called
"error trap" which captures signals emitted by a function application, should
they occur:

    
    
          f:{11 22 33 x}
          .[f;,1;:]
        0 22
          .[f;,4;:]
        (1;"index")
    

If a signal is not trapped, K pauses execution and opens the debugger, where a
user may interactively inspect the environment, evaluate code, and resume:

    
    
          f 4
        index error
        {11 22 33 x}
         ^
        >  x
        4
        >
    

Overall not that different from try...catch in a garden-variety interpreted
language with a REPL.
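For comparison, a rough Python analogue of the error-trap call `.[f;,4;:]` shown above (the `(status;value)` pairing is mimicked by hand; this is a sketch, not K's actual mechanism):

```python
def error_trap(f, args):
    # apply f to args; return (0, result) on success,
    # (1, error text) on failure, like K's .[f;args;:]
    try:
        return (0, f(*args))
    except Exception as e:
        return (1, str(e))

f = lambda x: [11, 22, 33][x]
print(error_trap(f, (1,)))  # (0, 22)
print(error_trap(f, (4,)))  # a (1, ...) error pair
```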

~~~
i_don_t_know
Thanks! That's fairly straightforward and, I guess, avoids having to pass a
closure.

------
rubyn00bie
Only because earlier today I so definitely thought about the whole "code is a
liability (debt)" reality we all live in... that is to say, I'm spit-balling
here:

but for once K makes me go "oh shit!" and not because of the craziness of its
syntax... rather, if that syntax is so precise/terse, and even if to the
outsider it looks fucking insane, does it have a wicked lower TCO because you
literally have insanely fewer lines of liability in your code, libraries,
dependencies, etc?

Is that the reason finance uses it, they maybe empirically know what the rest
of us don't seem to grasp? Or just because it's so domain specific? Or...
something else entirely?

~~~
hmry
The "amount" of code is the same as in any functional language though. It's
just written in a smaller text file.

~~~
chrispsn
Arthur tries to choose primitives carefully so that the number of steps _is_
smaller than in other languages (see the 'product' quote in another comment). So
even if you give each symbol an English name, the code will often end up
smaller.

~~~
wpietri
In which case it seems like two things are being conflated a lot here:
terseness and better primitives. I'm opposed to the first, but very interested
in the second. Is there a language you think focuses on approachability and
readability but keeps the good parts of K?

~~~
chrispsn
k9 (from Shakti) will have optional word equivalents for some symbols (or at
least common compositions) by default.

Have you looked at q?

numpy has a lot of k/APLish primitives built in.

fml was also interesting but never made it past the spec stage:

[https://www.reddit.com/r/apljk/comments/etpbbf/fml_an_optimi...](https://www.reddit.com/r/apljk/comments/etpbbf/fml_an_optimizing_functionoriented_array/)

------
duchenne
> +/!10

I personally find the given example harder to read than the equivalent in
python/matlab/javascript with lodash/probably many other languages

sum(range(10))

sum(1:10)

With "K", you have to know a few new conventions. In the second case you just
have to read, and the data flow is more obvious.

~~~
keymone
As I understand author’s point, your example with “sum” is only valid because
“sum” word has very few meanings, especially in software engineering world.

Even the word “product” would introduce certain ambiguity in context of real
world codebase: is it “math” product or some “domain” product?

It’s even worse with most other words.

So if in any given codebase you still have to know the specific meaning of
every word, how is it different from having to know specific meaning of every
K symbol?

~~~
qayxc
> So if in any given codebase you still have to know the specific meaning of
> every word, how is it different from having to know specific meaning of
> every K symbol?

So you never encountered namespaces then? Because that's why you use
namespaces. The number of symbols is limited, which is the reason why the
whole "a symbol is more explicit"-argument simply doesn't hold in general. It
breaks apart pretty quickly. That's why you'll find the following sentence
pretty much verbatim in most maths and cs publications: "Throughout this
paper, unless otherwise specified, we use the following notations," followed by
half a page of definitions. Plot twist: different authors often use different
notation even within the same field of research.

~~~
keymone
I didn’t say I buy that argument, it’s just my understanding of it.

------
Beltiras
I went through a similar transformation when I finally _understood_ LiSP. I
knew Python and had programmed in it for several years but it was breaking
into Scheme that changed my approach. I wonder if learning new paradigms
always changes how you use other languages.

~~~
gerikson
According to Peter Norvig, you were half-way to Lisp using Python:

[https://norvig.com/python-lisp.html](https://norvig.com/python-lisp.html)

~~~
kazinator
According to Guy Steele, if you're a C++ programmer using Java, you've been
dragged halfway to Lisp.

[https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...](https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg04045.html)

So by that estimate, if you switch to Python you're another halfway to Lisp
(three quarters of the way from C++).

~~~
wglb
I could never figure out how Java is in any sense halfway to Lisp.

Python, maybe.

~~~
lispm
The reference point is C++.

What makes Java nearer to Lisp than C++?

Mostly the Java runtime, the JVM. C++ basically comes with some kind of manual
memory management and raw data layout.

Lisp comes with Garbage Collection and managed memory. In Lisp one passes
references to arrays and other objects around. The runtime tracks which
objects are of what dynamic type and of what size. The garbage collection may
be a simple mark&sweep or a more complex, say, generational GC. The first
garbage collection was developed for Lisp around 1960 and has then slowly
spread to other languages. C++ was object-oriented, but with a more
traditional non-GC approach. Java changed that.

~~~
wglb
Those are certainly huge similarities, but I wonder whether that statement
could be tested by comparing how difficult a seasoned Java programmer finds
learning Lisp with how difficult a seasoned C++ programmer finds it.

------
uryga
what is `<<` ("ordinal") supposed to be doing? i understand how `<` works, but
how is it useful to apply it a second time?

the closest i got to finding some pattern is this:

    
    
      >>> xs
      '34210'
      >>> grade(xs)
      [4, 3, 2, 0, 1]
      >>> grade(grade(xs))
      [3, 4, 2, 1, 0] # :O
    

where

    
    
      grade = lambda xs: (
         [i for (i,_) in sorted(enumerate(xs), key=swap)]
      )
      swap = lambda p: (p[1], p[0])
    

no clue what it means though.

(i guess "ordinal" really is too ambiguous...)

 __EDIT__

alright, i see it now - `<<xs` is "for each item x of xs, where does x land
when you sort xs?"

~~~
icen
< grades upward: the result is the permutation of indices that sorts the
array (the i-th element of the result is the index of the i-th smallest item).

    
    
        x: 10?A:"abcdefghijklmnopqrstuvwxyz"
         x
        "ysyselacwl"
         +`x`y`z!(x;<x;<<x)
        x   y z
        --- - -
        "y" 6 8
        "s" 7 5
        "y" 4 9
        "s" 5 6
        "e" 9 2
        "l" 1 3
        "a" 3 0
        "c" 8 1
        "w" 0 7
        "l" 2 4
    

So <x gives the sort permutation (the y column), and <<x gives each element's
rank: its position once the array is sorted (the z column)
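Put differently, grading twice yields each element's rank. A small Python sketch using the same string as the table above (Python's stable sort matches K's grade-up on ties):

```python
def grade(xs):
    # <x : the permutation of indices that sorts xs
    return sorted(range(len(xs)), key=lambda i: xs[i])

x = "ysyselacwl"
g = grade(x)      # the y column in the table above
rank = grade(g)   # <<x, the z column: where each element lands when sorted
```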

~~~
i_don_t_know
I still don’t understand what the numbers in column y mean. I understand
column z and that’s also what grade up is returning in John Earnest’s oK:

    
    
        x:"baced"
        <x
      1 0 2 4 3
        <<x
      1 0 2 4 3
    

That is, for example, when sorting, the item at position 1 in the original
string (“a”) moves to position 0 (the beginning of the sorted string). So in
the end you get “abcde”.

What does column y mean? If I sort the original with it, I get “wllaysysce”.

Has this changed between versions of K?

~~~
i_don_t_know
Nevermind. I've figured it out with the help of the reference manual at
[http://www.nsl.com/k/k2/k295/kreflite.pdf](http://www.nsl.com/k/k2/k295/kreflite.pdf).
I knew it was a permutation but I applied it incorrectly.

------
Athas
People are so quick to reject K and APL-style languages for superficial
reasons that they never get to the deep and interesting reasons! I am mostly
familiar with APL, but I think the things I appreciate and dislike are about
the same in K.

One interesting philosophical difference, at least among some APL programmers,
is that building abstractions should be avoided. TFA has a hint of that
philosophy, in its suggestion that perhaps _naming_ (the root of abstraction)
hinders clarity. I think this is definitely worth considering, and it doesn't
really have anything to do with the (supposedly) cryptic syntax.

One reason I _don't_ like K/APL-ish vector programming is performance. I once
spent some time working with others on a GPU-targeting APL compiler, and I
found that some common programming patterns are quite antithetical to good
performance. In particular, the use of "nested arrays" (not the same as
multidimensional arrays) induces pointer structures that are difficult to do
anything with. Such nested arrays are typically necessary when you want to do
the equivalent of a "map" operation that does not apply to each of the scalars
at the bottom, but perhaps merely the rows of a matrix. Thus, control flow and
data are conflated. This is fine conceptually, but makes it difficult to
generate high-performance code.

Another concern is that encoding control flow as data requires a lot of memory
traffic (unless you have a Sufficiently Smart Compiler; much smarter than I
have ever seen). Consider computing the Mandelbrot set, which is essentially a
'while' for each of a bunch of points. An idiomatic APL implementation will
often put the while loop on the _outside_ of the loop over the point array,
which means the whole array is written back to memory on every iteration. In other
languages, it would be more idiomatic to apply the 'while' loop to each point
individually, which will then be able to run entirely in registers. You can
also do that in APL, but it is normally not idiomatic (and sometimes awkward)
to apply complex scalar functions to array elements.

Just look at this Mandelbrot implementation from Dyalog; they do it in exactly
the way I described: [https://www.dyalog.com/blog/2014/08/isolated-mandelbrot-set-...](https://www.dyalog.com/blog/2014/08/isolated-mandelbrot-set-explorer/)

Specifically, the conceptual 'while' loop has been turned into an outer 'for'
loop (this is OK because the 'while' loop is always bounded anyway):

    
    
           :For cnt :In 1↓⍳256                      ⍝ loop up to 255 times (the size of our color palette)
               escaped←4<coord×+coord               ⍝ mark those that have escaped the Mandelbrot set
               r[escaped/inds]←cnt                  ⍝ set their index in the color palette
               (inds coord)←(~escaped)∘/¨inds coord ⍝ keep those that have not yet escaped
               :If 0∊⍴inds ⋄ :Leave ⋄ :EndIf        ⍝ anything left to do?
               coord←set[inds]+×⍨coord              ⍝ the core Mandelbrot computation... z←(z*2)+c
           :EndFor
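The performance point can be sketched in plain Python (a toy escape-count kernel, not the Dyalog code): the array-style version keeps the loop outside the point array, so every iteration rebuilds whole intermediate arrays, while the point-style version keeps the loop inside, where the per-point state can stay in registers.

```python
def mandelbrot_array_style(cs, max_iter=50):
    # APL-style: the loop is OUTSIDE the point array; each iteration
    # rebuilds whole intermediate arrays (memory traffic every step).
    zs = [0j] * len(cs)
    counts = [0] * len(cs)
    for _ in range(max_iter):
        alive = [abs(z) <= 2.0 for z in zs]
        zs = [z * z + c if a else z for z, c, a in zip(zs, cs, alive)]
        counts = [n + a for n, a in zip(counts, alive)]
    return counts

def mandelbrot_point_style(c, max_iter=50):
    # Scalar style: the loop is INSIDE, one point at a time, so the
    # running state never round-trips through memory.
    z, n = 0j, 0
    while abs(z) <= 2.0 and n < max_iter:
        z = z * z + c
        n += 1
    return n
```

In an array language only the first shape is idiomatic; a compiler would have to fuse those per-iteration arrays away to match the second.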

~~~
chrispsn
There's a q Mandelbrot here:

[https://github.com/indiscible/qmandel](https://github.com/indiscible/qmandel)

And Arthur's version in k (also below):

[http://www.kparc.com/z/comp.k](http://www.kparc.com/z/comp.k)

I should convert it to k9 (Shakti)...

    
    
        +/~^+/b*b:49{c+(-/x*x;2**/x)}/c:-1.5 -1+(2*!2#w)%w:10

~~~
Athas
My K is weak, but I _think_ that implementation is also putting the fixed
point iteration on the outside, isn't it?

------
sullyj3
This looks cool, seems like it's proprietary and expensive though. Is there
anything similar that's free?

~~~
sedachv
[https://www.gnu.org/software/apl/](https://www.gnu.org/software/apl/)

Most likely available through your package manager.

~~~
kick
Better:

Free software K interpreter:

[https://bitbucket.org/ngn/k/](https://bitbucket.org/ngn/k/)

Arthur's K interpreter isn't actually that expensive for casual use anymore
(it's free for most non-commercial use and you can download it off the Shakti
website pretty easily), though. Not that I'd recommend it: Arthur's a genius,
but nothing's worth using proprietary software for.

What I usually recommend for learning the paradigm is J, largely because it
has far better resources than anything else in the space. J's labs are
wonderful, and the books Ken Iverson wrote before he died (all available for
free) are great both for learning the language and also for learning many
other things.

GNU APL is nice, and by no means am I trying to diss it: it's just not the
best for learning, in my opinion. Its info pages are really nice though, like
most GNU info pages.

~~~
sedachv
There are a lot of books, tutorials, and papers on APL and APL2, which are
applicable to GNU APL.

~~~
kick
I don't believe so? I started out using APL2; GNU APL seems distinct in ways
that are noticeable. Certainly not in a bad way, but I think enough to cause
confusion for any newcomer reading said books, tutorials and papers.

~~~
sedachv
The system functions and included workspaces are different, but I haven't had
any surprises with the operators so far.

------
smadge
Using / for reduction reminds me of the Bird–Meertens formalism.

------
mhd
The problem is also that you encounter K/APL in these circumstances: hardcore
math functions or ad hoc queries. I'd say that most languages look their worst
in these circumstances; deeply nested for loops for e.g. 3D math in C aren't
exactly instantly clear either.

Once you see e.g. input verification or other more "tedious" parts of
programs, things get decidedly less cryptic.

And less optimized/concise, of course. A bit like the people who complain
about GUI hello world programs taking 10 lines.

------
LessDmesg
I have the same question for K aficionados as I have for Forth ones: what
real-world, human-usable, important software has been written in it? A GUI
framework, a web browser, a window manager, a text editor, etc. Anything?

I'm asking because the article is poking at C-like languages, i.e. implying
that K is good for general-purpose use. Yet all I hear about is that K is good
at multiplying numbers and matrices, which is a pretty limited playground.

~~~
kick
Forth:

OpenFirmware (previously seen in Power Macs, pretty much every Sun device
until Sun died, and so on).

Canon Cat (greatest text editor of all time, by Jef Raskin, the guy who made
the Macintosh)

Forth Has Been to Space (multiple times, but who's keeping score?):
[https://www.forth.com/resources/space-applications/](https://www.forth.com/resources/space-applications/)

OKAD, which was used to create the processor with the lowest power usage per
instruction in the entire world.

k:

Important, no, but a substantial volume of stuff. Text editors, GUIs, window
managers, an operating system completely independent of any other, a pretty
important and very expensive database, and so on.

k, unlike Forth, came after software was seen as IP. Software as IP has made a
lot of people very angry and been widely regarded as a bad move.

k's ancestor, APL, was used in quite a few things quite elegantly, and while k
is different, it's not different enough to be a different paradigm. Some
examples: first practical electronic mail system, first widely-used electronic
mail system (used for Carter's successful Presidential campaign, for example),
first worldwide computer network, I could go on.

That k hasn't seen as much groundbreaking work done in it is less because of
the language itself and more because of the insane costs, trigger-happy
lawyers of kx, and money.

Of course, when you asked this of Forth the other day, you were trolling (
[https://news.ycombinator.com/item?id=22319154](https://news.ycombinator.com/item?id=22319154)
), so I imagine I might be wasting my time.

~~~
LessDmesg
Sorry, but I'm not the one trolling. Reading up on "Canon Cat", I find

> It had a text-based interface without a mouse, icons, or menus

 _sigh_

The Forth in space thing is just some drivers/firmware for spaceships. I.e.
once again nothing that an ordinary user would care about. The APL part is
devoid of links to real contemporary software. Not even something of Notepad++
quality...

Please don't waste your time as you don't seem to want to understand my
question. Thank you.

------
traitsnspecs
How do people do error handling in K? Or is the data typically well-formed in
real programs? Not bashing anything, I'm just genuinely curious.

------
bori5
“...and you can add 5 to an entire matrix at once as easily as a single
number. Why you would want this property remains elusive...”. Why would that
be elusive, I wonder? MATLAB has been doing it since the 80s, for example, and
that exact 'elusive' feature has been used countless times.

~~~
andybak
Those words were spoken "in character". The idea was that some readers would
chuckle at the naivety.

~~~
bori5
And many times I shake my head at how people miss humor/sarcasm on HN. Got
me.

------
Tomte
I totally buy that this is elegant for small number- or list-based things, but
since a financial database is named as the flagship product: what does I/O
look like in K? Both user input and disk access. How do you build a user
interface? How do you do networking?

Is it still as elegant?

------
msla
My problem with K isn't its syntax, it's its culture: People who are so self-
satisfied with their ability to read their own code they refuse to give
variables meaningful names, aping mathematical tropes they half-understand at
best, because those mathematical equations are only tolerable in the context
of a paper where the rest of the lifting is done by natural language prose.
They eschew comments without realizing that those comments enable terse
expressions by showing you have the basic maturity to express yourself in a
way others can understand.

Knuth tried to move us to that state with Literate Programming. How many K
programmers use Literate Programming? One? Two? Any?

~~~
kaminari
Elsewhere in this comment section, someone mentioned a HFT firm that
extensively used noweb and K.

------
at_a_remove
My hat is off to the people who can handle this kind of thing; I myself
cannot.

~~~
clarry
Do you really think you could not, even after taking a few weeks to study and
practice?

~~~
at_a_remove
Probably not, no.

I could easily see myself becoming dejected after printing out a few cheat
sheets and buying some relevant books. I would have attempted some exercises
and done whatever toy projects with an increasing feeling of dread that I was
simply aping what was before me only to find what I thought I had learned had
slipped away in a weekend. Should I get much further than that, I would then
try to solve an actual problem I had with it and that would be the end of it
as I would stagger into realms of development environments and dependencies to
try to get the right libraries in order, I would look for advice online and
tentatively ask questions only to be blasted for using the _wrong_ dialect of
whatever, or have my questions misconstrued until I could be informed that
what I _really_ have is an XY problem, I don't really want what I asked for
after all. What a relief to me.

At that point I throw in my hat and go back to what I am used to.

I know myself well enough to know that I just haven't the grit or patience to
torment myself with trying to find an application for the befunge du jour and,
as such, learning it becomes a Glass Bead Game for me. If my life depended on
it, perhaps, but this reminds me too much of the "executable line noise" I
escaped: Perl, with a kind of sly terseness I associate with any number of
obfuscations as a kind of puzzlebox monument to the "cleverness" of packing
something into a single line I have had to slam into while programming.

I lack the vim to tilt at these windmills.

~~~
tangentstorm
That's a pretty good set of things to visualize if you want to feel defeated
before you even get started.

I write K for a living, and J for fun. J is WAY crazier, and I learned it
specifically because it was so weird and crazy. I kinda wanted to remember
what it was like to be a beginner again.

In all these languages, you can always fall back to doing things the "old
way"... They all pretty much support structured and functional programming
with loops and functions and whatnot. It's just there are lots and lots of
shortcuts, and you get used to them over time.

(Also for what it's worth, the communities around vector languages do tend to
be pretty friendly and welcoming...)

~~~
clarry
> I write K for a living, and J for fun.

Do you have a clear preference for either? If you could choose J at work,
would you? Would you write K for fun?

> (Also for what it's worth, the communities around vector languages do tend
> to be pretty friendly and welcoming...)

If you've got a site / mailing list / IRC channel in mind, I'd like to know
about it.

------
goto11
Some context often missing from these "code golf" showcases:

Domain-specific languages have syntax optimized for specific tasks. This
often means very compact symbolic expressions which look completely
inscrutable to outsiders but are highly efficient once you know the language.
String processing in Perl, pointer arithmetic in C, selectors in jQuery.
Nobody can guess what the code does by looking at it; you have to learn the
language. But it is worth the investment when the task in question comes up
often.

So the question is, are you working in a domain where the investment in
learning a particular DSL will pay off?

But this context is completely absent from the article. By comparing the K
syntax for "reduce" with a for-loop, it assumes the competition is systems
languages like C or Go, where a for-loop is idiomatic. But then the question
is, how often do you use "reduce"-like operations in that domain anyway? Is
it really worth optimizing the language for?

If, on the other hand, the article admitted K is a domain-specific language
optimized for numerical processing, then a reasonable comparison would be
against NumPy or MATLAB, perhaps Haskell, not for-loops.
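To make that comparison concrete, here is a rough sketch (my own pairing, not
from the article) of two common K idioms next to NumPy equivalents:

```python
import numpy as np

a = np.arange(100)   # K: !100  -- the vector 0 1 2 ... 99

# K: +/!100  -- fold + over the vector (sum)
assert a.sum() == 4950

# K: |/a  -- fold max over the vector
assert np.maximum.reduce(a) == 99
```

The NumPy spellings are longer but map one-to-one onto the K forms, which is
roughly the comparison being asked for.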

~~~
tangentstorm
I don't think he was actually arguing that K is domain-specific, he was just
observing that it seems that way to outsiders.

If anything, the most successful applications seem to be with databases (not
math), but it's also pretty great for interactive graphics:

[https://github.com/JohnEarnest/ok/tree/gh-pages/ike](https://github.com/JohnEarnest/ok/tree/gh-pages/ike)

I work at the same company as the author, and we use K3 for all kinds of
things -- system administration, network servers, interactive web apps.

It's a pretty general purpose language. And you can even do old-school loops
if you want. It's just very concise.

Also, K has a whole set of one- or two-character symbols that let you express
most of what you'd use loops for in traditional languages.

------
rgrs
author ended article with

> Leaning back in your chair, you think to yourself:

> |/

what does this translate into?

~~~
floatingatoll
I reversed each of my list of objections.

------
noobermin
Seeing this article and the top comment, is this that contentious an issue? I
wasn't aware people care _that_ much about Algol vs. APL-ish languages.

------
yesbabyyes
His diff was lame, I'm sure the coworker would have gone along with

    
    
      Math.max(...list);

~~~
tlb
That hack isn't guaranteed to work with long lists, since the standard allows
a fixed limit on argument count. For example, in Node 12:

    
    
      > Math.max(..._.range(0, 1000000))
      Thrown:
      RangeError: Maximum call stack size exceeded

------
muzani
I have fallen in love with this language. How do I replace Java(Script) with
this?

~~~
rscho
I think J is better equipped for web programming.

------
brazzy
Disingenuous and unconvincing, like a sales pitch for a product that is
actually bad for you.

Yes, that kind of syntax can be wonderfully terse when doing simple arithmetic
operations on lists of integers.

Most programming work does not consist of doing simple arithmetic operations
on lists of integers.

If you want to convince me that this language is usable for anything but toy
problems specially chosen to demonstrate the language's strengths, show me how
it can be used to handle an HTTP request with some special cases for certain
headers, or to parse a custom date format with variants, or to implement a
game with realtime user input.

~~~
kick
XML parser:

[https://a.kx.com/a/k/examples/xml.k](https://a.kx.com/a/k/examples/xml.k)

~~~
kick
Ray-tracer:

[http://www.nsl.com/k/ray/raya.k](http://www.nsl.com/k/ray/raya.k)

~~~
kick
Graphical spreadsheet application:

[http://www.nsl.com/papers/spreadsheet.htm](http://www.nsl.com/papers/spreadsheet.htm)

~~~
brazzy
With 5 to 10 lines of comment per one line of code, it starts to become
readable. You're still left constantly wondering "what do a and x mean in this
context, again?"

3/10, would definitely not want to use for any kind of real work.

~~~
kick
John Earnest links to his tutorial in the Stages of Denial post; if you click
it and give it a read, you'll probably find it much easier to understand.

But you shouldn't feel pressured to use k: people who advocate for k play the
role of a Cassandra who can drown her tears in dollar bills.

~~~
brazzy
A Cassandra... or a victim of Stockholm Syndrome.

------
Veedrac

        K       +/!100
        Python  sum(range(100))
        Haskell sum [0..99]
        Rust    (0..100).sum()
        Ruby    (0..99).sum
    

There are plenty of ultra-concise languages for toy problems like these; they
are called ‘golfing languages’. Ultimately it's a bit pointless since most of
the time people aren't playing code golf.

(The issue with setting everything to obscure ultra-compressed keywords isn't
that it's not possible to learn _those keywords_ , but that most of the time
programming is done over specific and customized domains with larger and
problem-specific vocabularies.)
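As a sanity check, the table's rows really do compute the same thing. Here is
the Python row, plus the same fold written out the long way to mirror how K's
`+/!100` decomposes:

```python
# The toy problem from the table: sum of the integers 0..99.
assert sum(range(100)) == 4950

# The same fold spelled out, mirroring the K decomposition:
total = 0
for x in range(100):   # !100 enumerates 0..99
    total += x         # +/ folds + over the vector
assert total == 4950
```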

~~~
satokema
The article's point is a meta-point. Congratulations on actually being part of
the meme in the article.

~~~
brazzy
The article's point is dishonest smugassery. Congratulations on convincing
people to avoid K for its community, if not the language itself.

------
theamk
If you are stuck in a boring language and want something more concise, you
don’t have to give up legibility by going to K. There are plenty of languages
where you are not expected to use “for” loops.

Python/numpy is most accessible and probably closest to “instant
gratification”. Mathematica is great for, you know, mathematics, but you’ll
need a license. And there is always Haskell.

~~~
kick
k is perfectly readable. That's like, one of the biggest points of the post.
The post mocks the mindset featured within your comment.

None of the languages you listed are quite as readable as k to someone who
knows it.

All of the languages you listed are significantly slower and feature none of
the benefits of a concise notation, even with fast.ai's Python style guide
made to imitate Arthur Whitney.

[http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...](http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pdf)

~~~
Cyph0n
> k is perfectly readable [...] to someone who knows it

A great example of a tautology. Every language in existence passes this
test...

The real test is a language that is readable by those who don’t know it.

~~~
kick
If you don't know the language, you won't be writing, editing or improving
code in it anyway. Your test is therefore irrelevant for programming
languages.

~~~
kazinator
If people don't know a language, yet find it readable, they are more likely to
end up getting involved in doing those things.

Poor readability is not a good way to keep people out; you're not selecting
for talent.

~~~
neonate
You're overestimating how readable _any_ programming language is to non-
programmers.

~~~
saagarjha
With decent identifiers you can kind of guess the purpose of things without
understanding anything else about it. A file with words such as "latitude" and
"coordinate" and "distance" in it is probably something to do with maps, for
example.

~~~
neonate
Non-programmers making sense of code in this way is extremely rare. I don't
think this is a priority in programming language design, the same way that
making dashboards accessible to non-pilots is not a priority in airplane
design. That would certainly be a poor reason to sacrifice major improvements
in airplane efficiency or safety. Designing simpler dashboards for student
pilots as a learning device is of course a different matter.

------
hasa
Some people find these extremely compact-syntax languages entertaining. I
have a hard time understanding them, but such is life.

~~~
floatingatoll
Regular expressions are a more concise way of writing string-matching and
extraction code, often dramatically more efficient than doing without.
They’re also usually more compact than the equivalent description in spoken
English. For example, this proofreading regex detects possible issues in
spelling:

/(?<!c)i(?=e)/

~~~
quelltext
Doesn't that regex detect correct spellings instead of wrong ones? I.e. it
matches tier, tie, etc.

~~~
floatingatoll
Yes. Good eye :) The other comment is also quite accurate: it doesn’t properly
grasp phonetics.

~~~
quelltext
FWIW, I agree with your argument (I guess).

Regular expressions are complex and oftentimes a tool might be necessary to
get a better idea of what an expression will match. However, if we were to
make the language to describe them more verbose that'd still not solve this
issue. Having a concise way to describe them is a good thing. Those who grok
them can do so quickly without having to read several lines of a more
descriptive language and those who need assistance can still plug them into a
tool.
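For the curious, Python's re module supports the same lookbehind/lookahead
syntax, so the regex above can be tried directly (the sample words are my
own):

```python
import re

# "(?<!c)i(?=e)" matches an 'i' that is followed by 'e' and not
# preceded by 'c' -- i.e. spellings that FOLLOW the
# "i before e except after c" rule, as noted above.
pattern = re.compile(r"(?<!c)i(?=e)")

assert pattern.search("believe")        # 'ie' not after 'c': matches
assert not pattern.search("science")    # 'cie': the lookbehind blocks it
assert not pattern.search("receipt")    # 'cei': no 'ie' pair at all
```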

------
starpilot
K is a gift from God. That is all.

------
FeepingCreature
Dlang:

    
    
      list.reduce!max

------
goto11
If you want to sell me a language

do:

\- Explain what particular use cases the language is optimized for

\- Show its advantages compared to the alternatives

\- Tell me honestly what tasks it is not suited for

do not:

\- Use blatant strawmen to make the alternatives look worse

\- Tell me I am an idiot

~~~
SamReidHughes
Ultimately, you’re the one responsible for your opinions about the language,
not the author.

~~~
randallsquared
It seems like you're inferring a lot which wasn't said, which is quite
consistent of you! :)

------
busterarm
Are reduce and lambdas really that cryptic/complex to people? I think I need
an adult. -.-

~~~
ehsankia
range(100).reduce(plus, 0) isn't cryptic; what's cryptic is single-character
symbols. Looking at that line, unless you know K specifically, it's very
difficult to figure out it's a reduce. Whereas `sum(range(100))` is pretty
self-explanatory even to programmers who don't know Python.
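For reference, the spelled-out fold exists in real Python via functools (the
`plus` and `range(100).reduce` spellings above are pseudocode, not real
Python), and it agrees with the concise form:

```python
from functools import reduce
from operator import add

# The explicit fold, written in actual Python:
assert reduce(add, range(100), 0) == 4950

# The self-explanatory spelling:
assert sum(range(100)) == 4950
```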

~~~
kaminari
I don't know K but "+/" immediately stood out to me as "add over", so I think
they made a good choice in using "/" for fold.

"!", on the other hand...

~~~
qayxc
There are no good (or bad) choices for mapping abstract concepts like ranges,
mapping, reducing, or combining to common (ASCII) symbols. Every choice is
inherently arbitrary and carries no meaning whatsoever (besides maybe
ergonomics). One could just as well argue that "/" is the symbol recommended
for division by ISO 80000-2:2009 (item 2-9.6). The same source also shows why
symbols alone are inadequate as an explicit representation of abstract
concepts: the nabla symbol alone represents five different operations
depending on context, according to the ISO standard.

------
kazinator
> _When the entire system fits in a page of code you can understand everything
> about it from the top down._

I'm sorry, but that sounds like the juvenile fantasy of someone who is still
in their programming puberty.

Good luck fitting an operating system, web browser or air traffic control
system into a page of code.

~~~
gallabytes
there almost certainly is an encoding such that every program any human will
ever write fits in 128 bytes, though I doubt we'll ever design one. to
convince yourself of this, notice that you don't expect to ever produce two
programs with the same blake2 hash.

there's a lot of room for improvement in conciseness of code. I would still be
surprised if it was meaningfully possible to write a full-featured modern OS
with one page of APL

~~~
Dylan16807
I think you'd have to write the "Do what I mean." interpreter before you have
any chance of making programs that compact.

------
sklivvz1971
The irony of claiming to represent a readable language, and that others are
in denial of reality about this, while presenting your thesis on a
'90s-looking website with typographic lines spanning almost 400 characters at
100% width...

ABTASTTSBMR than the whole sentence. OK.

~~~
neonate
That objection seems trivial.

~~~
sklivvz1971
The whole article is trivial and trite, yet dressed up as something novel.

~~~
neonate
I don't think so. Even if that's true though, you could do better.

------
tus88
Is this C in Russia?

------
mthwsjc_
what is this orange website the post mentions?

~~~
vermilingua
Let me think.........

~~~
LandR
He might not be joking, I've always browsed HN with dark mode enabled. For me
it's a light and dark grey website :)

------
beiller
The javascript example of list.reduce((x,y)=>Math.max(x,y)) should have been
just Math.max(...list)

~~~
saagarjha
Suggested already, with a rebuttal:
[https://news.ycombinator.com/item?id=22523151](https://news.ycombinator.com/item?id=22523151)

------
ricardobeat
The name (and editor extensions) are taken by another 'K framework':
[http://www.kframework.org/index.php/Main_Page](http://www.kframework.org/index.php/Main_Page)

~~~
kick
k spawned in 1992. When did this framework come about?

