
Loopless Programming - codesections
https://code.jsoftware.com/wiki/Vocabulary/Loopless
======
jameshart
Most programmers have worked in a largely loopless programming language: SQL.
It lets you easily build the same sort of ‘Boolean state for every item’ as
this discusses, but it requires you to be much more explicit about which items
you want to line up next to one another if you’re joining two lists together.

In modern languages we usually just use map/reduce-like functional approaches
to handle the same sort of thing, so I'm not sure this approach is as unusual
as the author seems to think.

~~~
7thaccount
As someone with experience mostly in SQL, Python, and Linux commands, plus a
smattering of other languages I've played with (Haskell, Common Lisp, F#,
Ada, Prolog, Julia, Forth, Fortran, etc.), I can say that J and modern APL
systems are completely different in a lot of ways. Yes, functional languages
have a lot built around map/fold/filter that is very similar to J's tacit
programming, but the implementation makes all the difference. With J you
basically just have math (assuming you don't use the non-idiomatic if-then
constructs). I'm not saying J is better than Haskell at all, but keep in mind
that when they say "loopless", what they may be trying to convey is that
there are no imperative loops AND that the combination of dynamic typing and
tacit trains gives you something terse with zero boilerplate.

~~~
jandrese
It seems to me like it is terse because this is a DSL for dealing with vectors
and arrays of numbers.

Sure, the equivalent C for loops are a lot wordier for the problems this
language solves, but you'll drive yourself crazy trying to write a simple
event loop for your GUI in J.

~~~
tluyben2
You do everything differently in J/K/APL, so an event loop is not the notion
you want there (under the hood it happens, of course). People would not
typically write GUIs in these languages, but you can [0]. They are actual
languages, not DSLs.

[0]
[https://www.jsoftware.com/help/primer/gui.htm](https://www.jsoftware.com/help/primer/gui.htm)

------
NOGDP
> Looping - performing a computation repeatedly - is what programs do. In most
> computer languages, all loops are expressed by one of two statements:

> Do While - repeat a code block until a condition is met

> For - repeat a code block, with a loop index indicating how many times the
> block has been repeated.

> Programmers trained on scalar languages have spent many years internalizing
> the Do-While and For paradigms. Discarding these paradigms is the biggest
> re-think they need to make when they learn J.

A lot of languages have these kinds of semantics, and arguably in a more
streamlined / better-organised way - Haskell typeclasses, for example. I
don't know whether it's true that 'most languages' only support for/while
loops, but probably not. It also doesn't take 'many years' to internalise
for/while loops; these are fairly basic constructs that most novices learn at
the beginning of an introductory programming course.

> (x + y) is an expression rather than a statement. The J programmer can embed
> (x + y) in a larger expression, perhaps a matrix multiplication (w +/ . * (x
> + y)) which adds the equivalent of three more nested loops, but is still a
> single expression. Expressions can be combined; statements cannot.

(x + y) is going to be an expression in almost any language...
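Indeed, the quoted J expression composes just as readily in, say, NumPy. A
sketch with made-up small arrays (I'm reading `w +/ . * (x + y)` as a matrix
product applied to the elementwise sum):

```python
import numpy as np

# Made-up inputs, just to show (x + y) is a value that embeds directly
# in a larger expression, the way the article describes for J.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
w = np.ones((2, 3))

# Roughly J's  w +/ . * (x + y) : a matrix product applied to the sum.
result = w @ (x + y)
```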

> Many modern languages have iterators, which slightly streamline the For
> loop, but without addressing its fundamental deficiencies.

What are the 'fundamental deficiencies'?

This seems like a pretty low-effort write-up.

~~~
kragen
It's disappointing to see someone aggressively defending their ignorance like
this. It's okay if you don't understand what the article is saying, but it's
unfortunate that you are blaming that on the article.

Maybe your time would be better spent solving some coding challenges in J or
another array language like Dyalog APL, Kona, or A+, then comparing your
solutions with others’ using the same language, rather than posting voluminous
comments about how your lack of understanding means there's nothing to
understand. It probably won't change your life, but it might be a useful tool
in your mental toolbox when you're using more mainstream languages like R,
SQL, or Octave; libraries like TensorFlow, Numpy, Pandas, PyTorch, or parts of
Boost; or hardware like GPUs or processors with NEON and SSE.

(Related: Blub.)

~~~
dang
I agree with you on the fundamentals but that first paragraph isn't helpful.
The second paragraph is fine.

I suffer from the same annoyance about the same thing, so I know where you're
coming from; I still do it myself and ought to moderate my own comments when I
do. But blunt statements about others' ignorance only lead to pain instead of
pleasure, and drive people further away. Not only does accuracy in such blunt
statements not help, it makes their effects considerably worse. One of my
teachers had a Ph.D. in psychology from Stanford and told me that he learned
one thing from his Ph.D.: punishment is not conducive to learning.

In case it helps with bona fides, we're somewhat licentious about bending the
rules in favor of APL and its family on HN. Of the great alternative
programming universes (Lisp, Forth, Prolog, ?) it is surely the least
understood.

~~~
kragen
I appreciate the feedback! I'm not a user of J or APL in any regular way, but
I feel that there's something going on there that I don't understand (and, due
to spreadsheets serving APL's 1970s constituency better than APL did, perhaps
nobody ever will). I suspect that J itself is fatally flawed, but I don't
understand its merits well enough to be sure how to fix it.

My frustration comes from the quality of the discourse rather than from some
feeling that my favorite language is being slighted. Perhaps it's unfair of me
to be so demanding of others when I'm so often stubbornly ignorant myself, but
I would like people to _just not post_ middlebrow dismissals of this sort (and
to tell me when I'm doing something similar); they make rational discussion
impractically difficult to find in the interstices between the aggressive
posturing. (As inimino said,
[https://news.ycombinator.com/item?id=21302174](https://news.ycombinator.com/item?id=21302174)
"This is why we can't have nice discussions about programming languages." I
believe that's actually true.) Can you imagine biologists attempting to
discuss the evidence about the evolution of a particular signaling pathway at
a conference where Creationists are shouting that evolution doesn't create new
species?

Clearly the folks voting on HN have different preferences.

I want to emphasize that it's not the poster's cluelessness that I'm
criticizing. Everyone starts out clueless about everything and stays that way
about most things; there is nothing wrong with that. Nor is it their
willingness to spout off about their cluelessness — expressing your
misconceptions is very often the quickest way to get people to correct them,
which happened in this case; both inimino and I wasted our time laying some
deep knowledge on the dude. Rather, it's their _persistent refusal to notice
their own lack of understanding_ — and the public approbation of that refusal
— that I think poisons the well of rational discourse. That's what reminded me
of my encounter with Kent Hovind on the streets of Berkeley.

Think, by contrast, of Leibniz's ideal: _Quando orientur controversiae, non
magis disputatione opus erit inter duos philosophos, quam inter duos
computistas. Sufficiet enim calamos in manus sumere sedereque ad abacos, et
sibi mutuo (accito si placet amico) dicere: calculemus._ Think of the pleasant
collegiality that in fact exists among mathematicians, where a few minutes of
discussion commonly suffices to convert an opponent into an ally, and the
youngest and least experienced can point out an error made by the most
respected with the full expectation that their correction — if correct — will
be gratefully accepted. In this case, at the other extreme, we have a
controversy which is easily resolved with three minutes watching YouTube, but
which instead spawned a long thread of aggressive comments, complete with
boasts about how easy it is to understand while loops. This is the kind of
thing that led to JWZ's famous gibe.

How can we foster an environment more like Leibniz's ideal, if not by pointing
out the most egregious discrepancies in behavior? Perhaps my arrogance is not
the way — this is IIRC why you didn't want to work with me at Skysheet — but
what is?

~~~
inimino
At the risk of taking this too meta... there's something interesting here
about online communities like HN.

In everyday human interactions, we have a lot of contextual and social cues
about who is more experienced, respected, expert, even who is older; in short,
all kinds of social hierarchical information that guides and constrains our
behavior. Online, especially in a community of sufficient size that most
interactions are between strangers, as here, we lack these cues and must
approach each other as equals, or at least as unknowns. It's very
democratizing, and sometimes it's incredibly aggravating.

There are things a professor can say to a student that a student can't say to
a professor. The professor can police students' behavior and ask a student to
leave. So we would not expect this kind of "aggressive defense of ignorance"
in a classroom, because there is someone there who is responsible for creating
a different environment, as part of an effective, millennia-old tradition of
inquiry.

The zen master can whack the novice with a stick. Sometimes leading to
enlightenment, sometimes just to the master getting some peace.

As with the professor, it's the relationship and the environment that makes
this acceptable. You won't get far by telling strangers on the street that
they are woefully uninformed, or by whacking them with a stick. Online, we're
essentially all strangers on the street.

Certain basic life skills and attitudes can be communicated and acquired
almost immediately when you can get whacked with a stick that can hardly be
acquired online at all. Online communities seem to be a terrible place to
learn humility, for example.

In response to these realities of online engagement, there are community-level
and individual-level approaches. Leaving the former aside, as an individual
there are two things I find helpful, one is detachment and the other is to
remember the audience.

The "wrong on the internet" compulsion maybe comes primarily from expectations
developed offline, where we can talk sense into people, generally within some
institution that facilitates this (school, church, work, etc) and which
generally doesn't exist online. This can be annoying. Perhaps it's this
annoyance that largely leads to endless online arguments. Sometimes you want
to communicate to someone, gently but unmistakably, that you know more about
the topic at hand than they are likely to learn in the next ten years. This is
likely something that would be communicated _automatically and invisibly_ by
environment and context offline, but is almost impossible to communicate at
all online. Online, the distinguished biologist and the earnest creationist
high school student appear to have equal weight, especially to the high school
student. Sometimes you feel the need to communicate to someone that their
ignorance of the topic is matched only by their ignorance of their own
ignorance, but you can't. You wouldn't be heard, the environment doesn't
support it, and anyway it looks bad. If we're all as equals, or at least
unknowns, the next person who copies your strong style, more likely than not,
will lack the experience that justifies it.

Remembering the audience means that any interaction in a public forum is more
likely to influence bystanders than the one directly addressed. It's like a
debate, in which debaters address each other but actually aim to persuade the
audience. Unfortunately this usually means it's all rhetoric and favors
shallow attention-getting over deep discussion and exploration, but that's
another topic. Anyway, you can reply to the person but aim more to persuade
the audience, which in general is bigger and more likely to be swayed by your
reasons than someone who is already arguing against them.

Beyond promoting your position, trying to encourage better discourse, from the
audience, rather than the person addressed, seems to help.

~~~
kragen
Those are wonderful points, and I really appreciate them. I am persuaded that
it is important to set a good example of behavior for the kids. (And, perhaps,
for myself when I'm talking about things I'm even more ignorant of than I am
about J; an example is linked below.)

There is _some_ amount of "social hierarchical information" available, if you
look carefully — the original poster in this thread has "karma" of 47, while
you have 2957, I have 12549, and dang has 52985, plus 29993 as gruseom,
although that's less visible. But of course most of the people at the Hackers
Conference don't have HN accounts on here at all, so this is at best a poor
guide; and, even for those hackers with accounts on the site, surely it would
be a grave error to consider me senior to, say, lutusp, lispm, davewiner,
Arnt, tonyg, kens, masswerk, or DonHopkins, simply because my account has
higher karma. An even more extreme example is my friend johncowan, who has 6
karma and is one of the major authors of R7RS.

High karma is perhaps more an indicator of the kind of poor impulse control
that results in wasting our time trying to educate the deliberately clueless,
or in my case just going off half-cocked on topics I don't know enough about,
than of actual seniority. All of the people in that list are more accomplished
hackers than I am, but they have less karma in large part because they post
less, perhaps because they're hacking.

To some extent, spelling, vocabulary, and punctuation are similar signals, but
consider
[https://news.ycombinator.com/item?id=20404735](https://news.ycombinator.com/item?id=20404735),
written by someone who apparently really knew what they were talking about, in
depth, in a way that I absolutely did not — "Lol" or no "Lol". And even at
best those indicators only serve to indicate social background and literacy,
which are only weakly correlated with competence.

I wonder if there is at least something we could do to make voting more
thoughtful; for example, put the voting arrows at the end of the comment
rather than its beginning, or even after all the replies to the comment.
(People could collapse the replies to find the arrows if they were really
determined to vote without looking at the responses.) Or use a PageRank-style
or Advogato-style trust metric rather than raw vote count, so that the votes
of people like the ones I listed above would count for more; or, like
lobste.rs, request a reason for downvoting. Fundamentally, though, I think
there's a kind of insuperable conflict between thoughtful discussion and hair-
trigger interactivity. Long comments rarely get many votes, either up or down,
because they take too long to read.

I'm really sick of seeing thoughtful comments like vkou's in
[https://news.ycombinator.com/item?id=20395050](https://news.ycombinator.com/item?id=20395050),
mine in
[https://news.ycombinator.com/item?id=20276994](https://news.ycombinator.com/item?id=20276994),
and eloff's in
[https://news.ycombinator.com/item?id=20275006](https://news.ycombinator.com/item?id=20275006)
(although it was mistaken) punished by downvotes and even flags like we're
filthy spammers.

Again, I really appreciate your thoughtful reply. Maybe we should set up an
"Old Hats" mailing list or something for discussions like these. Maybe it's
possible to rescue HN from the "finance-obsessed man-children and brogrammers"
JWZ refers to.

~~~
dang
Karma is mostly an indication of how much time people have spent posting to
the site.

I agree with you about some of those examples and have reset the score on
them; we do that routinely when we see good comments unfairly downvoted. So do
a lot of users, while the upvote window is still open. The corrective upvote
is a standard practice here. And yes, that still leaves some good comments in
negative space. Voting is a big messy statistical cloud. I don't think there's
a way to make it precise. Maybe it's worth noting that the examples you cited
are already months old? There have been about a million comments posted to HN
since those. It's inevitable that a sample that large will contain some shitty
outliers, i.e. really unjust cases. Comments tend to fluctuate up and down in
score; some are going to end up in the red just stochastically. Perhaps we
should be more open to experimenting with the voting system, but years of
looking closely at that data has diminished my sense of what's possible. I
think the two biggest factors are human nature and randomness, and we can't do
much about either. There could still be better mechanisms for channeling them,
though.

~~~
kragen
I'm glad you're here too.

------
unnouinceput
This reminds me of regex, and the struggle two months later to remember wtf I
was trying to do there. Easy to write, hard to maintain, so thank you but no
thank you; I prefer loops, even if it means going down to individual elements
in a list/matrix. For any meaningful project, maintenance is the threshold
that makes or breaks it.

~~~
coldtea
> _I prefer the loops instead even if it means going down at individual
> elements in a list /matrix._

Actually you have it backwards, probably because you confused J's somewhat
cryptic notation as the only way to implement the same concept.

It's the for loops, with their sprawl of non-declarative code, that would be
equivalent to the regex opaqueness, and a well named operator or function to
achieve the same thing that would be more like J.

E.g. would you rather use a:

    
    
      sort(myList, order=desc) 
    

or write your own sorting with for loops?

Similarly, think of operations like:

    
    
      findElement(myList, predicate)
    
      filterElements(myList, predicate)
    
      keepElements(myList, predicate)
    
      forEach(myList, myFunc)
    
      forEachParallel(myList, myFunc, chunks=5)
    

In other words, reduce, map, and specialized versions of them.

If 10-20 such language-provided functions covered all cases, I'd use them
over for loops all the time. In fact, that's how people use e.g. lodash.

What J adds is a succinct syntax to write and compose such primitives at the
language level.

But such a syntax is not necessary to achieve the same concept (although not
the same brevity/expressiveness) and be better/more readable than for loops...
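A rough Python sketch of the same idea (the helper names are illustrative,
mirroring the hypothetical ones above, not any real library):

```python
# Well-named helpers standing in for terse primitives: each replaces a
# hand-written for loop with a declarative one-liner at the call site.

def find_element(xs, predicate):
    """Return the first element satisfying predicate, or None."""
    return next((x for x in xs if predicate(x)), None)

def keep_elements(xs, predicate):
    """Return the elements satisfying predicate."""
    return [x for x in xs if predicate(x)]

my_list = [3, -1, 4, -1, 5]

first_negative = find_element(my_list, lambda n: n < 0)
positives = keep_elements(my_list, lambda n: n > 0)
sorted_desc = sorted(my_list, reverse=True)  # sort(myList, order=desc)
```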

P.S. And regexes themselves should be compared to the equivalent parsing code
-- which could require an FSM, or an ad-hoc buggy implementation of the same
checks/captures spanning tens or hundreds of lines depending on the regex.
Except if you just use a regex for very simple things where e.g. a "contains"
function or some splitting will be simpler.
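To make the comparison concrete, here's a sketch of a regex next to the
ad-hoc scanning code it replaces (the date format and input are made up for
illustration):

```python
import re

# Made-up input; the task is to capture a YYYY-MM-DD date.
text = "order 42 shipped on 2019-11-08"

# One line with a regex...
m = re.search(r"(\d{4})-(\d{2})-(\d{2})", text)
date = m.groups()

# ...versus an ad-hoc scan performing the same checks and captures.
def find_date(s):
    for i in range(len(s) - 9):
        chunk = s[i:i + 10]
        if (chunk[4] == "-" and chunk[7] == "-"
                and (chunk[:4] + chunk[5:7] + chunk[8:10]).isdigit()):
            return (chunk[:4], chunk[5:7], chunk[8:10])
    return None
```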

~~~
tluyben2
I have a substantial library with those functions for C#. Why not use LINQ?
Because I see people struggle with it (usually external clients who license
our code and outsource development); when they do, they resort to loops. So I
wrote these simpler functions, implemented mostly with LINQ, which keep the
people who don't understand LINQ (and yes, there are rather a lot of them)
from messing up the codebase. The advantage is that these functions translate
to any other language, while LINQ doesn't, so we have the same (partially
implemented) semantics for the same things across the few languages we use.

------
pjungwir
I still remember as a kid back in the 80s when a friend and I were making
BASIC games on our Tandy 1000s with just the reference manual that came with
the computer, and he was trying to explain to me what a FOR loop was, and I
was _not getting it_. "Why would you want to do the same thing twice?"

Just this morning my own 10-year-old was looking over my shoulder while I was
debugging some C, and I explained for loops to him. Since he plays violin I
said it was like a repeat in music, and he seemed to grok that right away.

My other memory of those days is, after years of dismissing GOSUB as useless
("Why would I want to go back to the same place I just left?"), finally having
a flash of enlightenment and getting the point. It's a function call! (Not
that I knew what those were....)

Sorry this has nothing to do with the wiki page. :-) Except maybe that to at
least one kid loopy programming was unnatural.

~~~
tluyben2
Ha! Yes, in the early 80s as a kid I only had the reference manual and some
magazines (published by photocopying dot-matrix-printed pages) to work with,
and I remember making gotos jump forward and then, when done, backward. Some
older guy at a meetup (I was the youngest there for many years) told me to
check out gosub; it suddenly clicked after that. At first I thought they were
just inconvenient gotos.

I guess the connection between that time and J/K, for me, is that I really
like the concept of needing just a (printed!) reference manual and being able
to do everything you need. It is so liberating (for me, anyway); it is also
the reason I like embedded asm/C programming: mostly I need nothing but the
reference manual (and after all these years, not even that). To me it makes
other types of work (web frontend/backend or native apps), with all their
(unstable) libs, tedious; I am good at those (esp. native apps) but I don't
really enjoy them as much.

------
jadbox
As far as explicit loops go, I probably write out a loop once a month, and
maybe not even that often. Using map/filter/reduce [as well as sugar funcs
like until/any/all] solves virtually all the common cases of working with
lists. Granted, it's not sufficient if you're writing specialized code like
sorting arrays efficiently, but for general development, going higher-order
is the way to go.

~~~
agumonkey
You can say this now, but not long ago map/filter/reduce were almost esoteric.

~~~
whateveracct
I'd say even just 5 years ago they were weird FP features, with LINQ being the
most mainstream version. Now Go is the only mainstream language without them
hehe.

~~~
suzuki
You can write your own LINQ in Go with a very short program. See
[https://github.com/nukata/linq-in-go](https://github.com/nukata/linq-in-go)
for example.

Here is a self-contained excerpt:

    
    
      type Any = interface{}
    
      type Enumerator func(yield func(element Any))
    
      // Select creates an Enumerator which applies f to each of elements.
      func (loop Enumerator) Select(f func(Any) Any) Enumerator {
        return func(yield func(Any)) {
          loop(func(element Any) {
            value := f(element)
            yield(value)
          })
        }
      }
    
      // Range creates an Enumerator which counts from start
      // up to start + count - 1.
      func Range(start, count int) Enumerator {
        end := start + count
        return func(yield func(Any)) {
          for i := start; i < end; i++ {
            yield(i)
          }
        }
      }
    

Now you can write the following:

    
    
      squares := Range(1, 10).Select(func(x Any) Any { return x.(int) * x.(int) })
      squares(func(num Any) {
        fmt.Println(num) // assumes "fmt" is imported
      })
      // Output:
      // 1
      // 4
      // 9
      // 16
      // 25
      // 36
      // 49
      // 64
      // 81
      // 100
    

I'd say it is so elegant in Go!

~~~
jadbox
It's a little annoying, though, to be forced to use interface{} as the type
parameter for higher-order list operations, as there's a performance overhead
in using it. Hopefully this is remedied when Go gets generics.

~~~
whateveracct
Not to mention the complete lack of type safety (there are unsafe-style casts
even in the example code).

~~~
suzuki
Good point! I'm looking forward to generics in Go.

------
Athas
Last time I did APL programming I was a bit bothered by the performance
implications of some of the standard loopless programming styles. In
particular, it's hard to nest loops.

As an example of the implications, consider computing the Mandelbrot set.
I'll be using NumPy here to ensure people can follow what I'm doing, but for
the point I wish to make, it's similar to how you'd write it in APL. The
Mandelbrot set is computed by applying a function like this to each of a
bunch of complex numbers:

    
    
      def dot(z):  # squared magnitude: z.real**2 + z.imag**2
        return z.real * z.real + z.imag * z.imag
      def divergence(c, d):
        i = 0
        z = c
        while i < d and dot(z) < 4.0:
          z = c + z * z
          i = i + 1
        return i
    

To apply this to many points simultaneously in a vectorised "loopless" style,
we'd write it like this:

    
    
      def mandelbrot_numpy(c, d):
          output = np.zeros(c.shape)
          z = np.zeros(c.shape, np.complex64)
          for it in range(d):
              notdone = np.less(z.real*z.real + z.imag*z.imag, 4.0)
              output[notdone] = it
              z[notdone] = z[notdone]**2 + c[notdone]
          return output
    

There is just one `for` loop, which is pretty easy to do in APL. The `while`
loop has been subsumed into control flow encoded in boolean arrays. This is
not _exactly_ how you'd do it in APL, but it has a similar feel. It's also pretty
slow, because we are manifesting the entire `z` array in memory for every
iteration in the outer loop. In contrast, an old school loop over every point,
with an inner while loop for every point, would involve only two memory
accesses per point. On a GPU, I have measured the vectorised style to be about
30x slower than one with a conventional `while` loop.
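Concretely, the "old school" version I mean looks something like this in
plain Python (a sketch of the shape, not a tuned implementation):

```python
# Per-point loop: the while loop lives inside the loop over points, so no
# intermediate arrays are manifested between iterations.
def mandelbrot_loops(cs, d):
    output = [0] * len(cs)
    for idx, c in enumerate(cs):
        z = c
        i = 0
        while i < d and (z.real * z.real + z.imag * z.imag) < 4.0:
            z = c + z * z
            i += 1
        output[idx] = i
    return output
```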

~~~
regularfry
_In theory_ you could escape that by using generators rather than explicitly
creating the intermediate list. No idea if J does that under the covers
though.

------
JesseAldridge
> add each number in a list x to each number in the corresponding row of a
> two-dimensional array y

I mean, I could write a function `add(x, y)` that does that in any language.
You could even inspect the data or type to make it polymorphic. I believe
NumPy does this, for example.
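For instance, a NumPy sketch of the quoted operation (made-up arrays), where
broadcasting supplies the polymorphism:

```python
import numpy as np

# x is a list of numbers, y a two-dimensional array; broadcasting adds
# x[i] to every element in row i of y, with no explicit loop.
x = np.array([10, 20])
y = np.array([[1, 2, 3],
              [4, 5, 6]])

result = x[:, None] + y  # row 0 gets +10, row 1 gets +20
```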

Skimming the rest of the article, I don't get how this is different from any
other language. The primary difference seems to be the function names are all
one or two characters long for some reason. I must be missing something...

~~~
plorkyeran
The fact that they're built into the language is in fact significant. Yes, you
obviously could implement J's set of primitives in nearly any language and
then use those. In practice though, you won't, and if you did then many of the
people reading the code wouldn't understand it very well.

Imagine if lodash was built into JS, and optimized by the implementations to
the point where it was faster than not using it. Idiomatic JS would look very
different even though it hasn't made anything possible that wasn't previously
possible.

------
johnday
Almost any functional programming language has the same power and quality of
life in programming, except with the added bonus that things have _names_.

If you come up with a new looping construct (which you probably won't), just
create a type class for it and you're done; any data structure you want now
has that construct, if you give it a sensible implementation.

It would be very unusual to write a Haskell program with explicit loops in it.
The closest you ever get would be something like `forM_` and even that doesn't
really count. I suppose a more direct example would be a recursive function
call, but imperative programmers may be surprised at just how few of these
actually crop up in production code.

~~~
whateveracct
And even when recursion does show up, the return value tends to "read" better.
Like "the sum is the head of the list plus the sum of the tail." Whereas with
an imperative loop, I have to either recognize an idiom or manually execute
the loop in my brain's VM.
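A sketch of that reading in Python, for the imperative-minded (recursive, not
how you'd actually sum a list in production code):

```python
# "The sum is the head of the list plus the sum of the tail."
def total(xs):
    if not xs:
        return 0
    head, *tail = xs
    return head + total(tail)
```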

------
japanoise
J is one of the strangest programming languages I've ever tried. I would
recommend giving it a go, though I wouldn't personally write actual software
in it.

~~~
bbanyc
J is APL in ASCII. My first PC was a hand-me-down from a family member who had
installed APL on it and put a bunch of stickers on the keyboard for typing all
the strange characters.

Where I work, we have some production code in another APL-family language, q,
which is the main interface to kdb - a remarkably fast time-series database in
a remarkably tiny binary, with a remarkably expensive price tag.

I think APL and its kin disprove the "Blub Paradox." Here's a language that in
certain aspects is far more powerful even than Lisp, and in others falls down
on tasks that are trivially simple elsewhere. And that's fine. Horses for
courses.

------
defanor
See also: array programming [0] (apparently 1970s APL, on which J is based,
didn't even have explicit for/while loops).

Nowadays even common imperative languages have higher-order functions, so I
don't find it very exciting. Though still quite handy in R and similar
languages where you deal with array-like structures almost all the time.

[0]
[https://en.wikipedia.org/wiki/Array_programming](https://en.wikipedia.org/wiki/Array_programming)

------
henrikschroder
In C#/.NET there are language constructs like LINQ, and framework methods on
generic collections like Select() and Where() and SkipWhile(), etc.

For example, if you want to write code that returns the first X items in an
array that satisfy a predicate P, it will look a lot neater and readable using
code like that, than if you were to write it with for loops and if statements.

But it's still just syntactic sugar, the neater code just masks the underlying
code that contains the actual loops and conditionals.

As always, it's a tool, and can be misused like all tools. It's a balance, the
neater code might be more readable, but the for/if code might be easier to
change or optimize down the line if conditions change. It all depends. There
are no silver bullets, just tools, and trying to minimize the amount of tools
in your toolbox is just dumb.

~~~
Gibbon1
> But it's still just syntactic sugar, the neater code just masks the
> underlying code that contains the actual loops and conditionals.

It seemed to me that it was often more performant than using a loop.

~~~
henrikschroder
It depends on what you do, and the underlying code can be pretty smart. But it
can also be pretty dumb.

The main difference is that if you use .Where(P).Take(X), you're actually
generating a new enumerator, a new coroutine, and that code doesn't get called
until you actually enumerate it.

But if you write your for/if loop, you run the code immediately, you go
through the source array, run the predicate on each item, until you have your
X items, and then you return a new array with those elements.
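The same distinction can be sketched in Python, with a generator pipeline
standing in for .Where(P).Take(X) (names and data here are made up):

```python
import itertools

# Made-up source and predicate.
source = range(100)

def p(n):
    return n % 3 == 0

# Lazy pipeline, like .Where(p).Take(4): building it runs no code yet.
lazy = itertools.islice((n for n in source if p(n)), 4)
first_four = list(lazy)  # the work happens only here

# Eager for/if version: runs immediately and materialises the result.
eager = []
for n in source:
    if p(n):
        eager.append(n)
        if len(eager) == 4:
            break
```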

------
stewbrew
2 questions:

- Doesn't "loopless" actually mean "implicit looping"? (unless you assume
infinite parallelization)

- How do you debug a chain of "loopless" functions when some corner case
invalidates your assumptions?

~~~
brudgers
1. The article describes different types of loops. If you buy that, then
"looping" can mean several similar but distinct things. To me, the big
distinction is whether or not we "know" how long the input is. Or, more
broadly, whether or not we can assert that the input terminates.

If we can assert that the input terminates, then we can use "for i; while
i < input.length" logic. If we can't assert input termination, then we have
to use "while input.next != EOF" logic, and we might wind up calling
"input.next" forever (but the system will probably crash first).

2. If the second form of looping -- calling input.next -- is like what you
mean by "loopless" then debugging corner cases means something like looking at
crash logs. But debugging any crash means looking at crash logs.

Or sometimes it means just restarting the system. Which is what a lot of
debugging looks like at the interesting scale of widely distributed
significantly concurrent computation.

3\. For problems of interesting size, assumptions about the data can be made
based on statistical analysis of the input. Then corner cases become
statistical anomalies that may or may not be worth engineering against.
Systems crashing are as inevitable as off by one errors.

------
stabbles
C++ devs are encouraged to do loopless programming by using algorithms in the
standard library; plus C++20 ranges should improve the ease of loopless
programming.

Where J shines is array based programming. The C++ algorithms are concerned
with vectors, but in J you can combine matrices, vectors and scalars in
concise expressions.

~~~
blt
The C++ <algorithm> library has a huge limitation: algorithms return a single
iterator that cannot be further consumed by other algorithms. This makes it
impossible to chain algorithms in a manner like LINQ. The ranges library will
make a dramatic difference for this programming style.

------
jonny383
This article doesn't actually list any of its self-claimed "fundamental
deficiencies". There's a reason that C (and many other languages) have been
using pre- and post-conditional loops for over fifty years - because it's what
the machine actually does, and they are straightforward to follow.

The article mentions "A C programmer would write", and then shows two nested
for loops. This is almost correct, but I think any good C programmer would
write this once as a method or a macro, after which calling it is as simple as
x + y, without the snake oil.

In either case, both are going to execute something like

    .loop
        CMP r0, r1
        JNZ .end_of_loop
        ADD r2, r1        ; move some memory around
        JMP .loop
    .end_of_loop

And what does that look like... oh wow it's a loop.

The J language was written in 1990. It's now 2019. The language is just wrong.

~~~
thenewnewguy
I have no real comment on C vs J in this instance, but I find it ironic these
two statements appear in the same comment:

> There's a reason that C (any many other languages) have been using pre and
> post conditional loops for over fifty years

> The J language was written in 1990. It's now 2019. The language is just
> wrong.

So which is it: does a practice being old make it good, or bad?

~~~
jonny383
It's highlighting that C (and related languages) have been using loops for
over fifty years, and that J has been around for 29 years with no marketshare.
Therefore, it's safe to assume that A) C is correct, and people use loops for
a reason, and B) J is wrong.

~~~
inimino
> safe to assume

Assuming that because something is popular it must be good is an example of
assuming that things are as they should be, or deriving ought from is[1].
Assuming that because something is unpopular, it must be "wrong" is worse.

Do you make all your technical decisions based on what is most popular? Do you
always assume that whatever has not become popular must be "wrong"?

[1]:
[https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem](https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem)

~~~
jonny383
Except I'm not assuming it's wrong because it's unpopular. Read the original
comment, I clearly stated why it's wrong. My last comment was giving you a
possible explanation as to why the witnessed behavior has occurred.

Also, please show me one successful software package (and by successful, I
mean largely consumed by consumers and a product leader) that does not use
looping techniques.

Like all technical decisions, they are made by using the most correct and
appropriate choice. Which is why the industry has been using pre and post
condition loops since the beginning. This "don't use loops" attitude is
equivalent to trying to reinvent the wheel.

~~~
thenewnewguy
> Except I'm not assuming it's wrong because it's unpopular. Read the original
> comment, I clearly stated why it's wrong.

Except you clearly are. The only other reason stated in the original post
seems to be "because it's similar to what the hardware does", which in my
opinion is completely irrelevant - how does that in any way change how useful
it is to program with?

> Also, please show me one successful software package (and by successful, I
> mean largely consumed by consumers and a product leader) that does not use
> looping techniques.

Most functional programming languages either have no loop construct or
heavily discourage using loops.

> Like all technical decisions, they are made by using the most correct and
> appropriate choice.

I don't believe that you're naive enough to think that one decision is correct
in _all_ scenarios, or that programming paradigms cannot evolve over time.

> Which is why the industry has been using pre and post condition loops since
> the beginning.

Congrats, we're back to "it's popular so it's good".

------
hinkley
I recall another language that was trying to do something similar. In most
languages you have to decide about x:1 versus x:* relationships very early and
it’s a painful refactor to go back and fix it. This one tried to fix it as
well so that much of the time you only changed the variable declaration and
the code just worked.

jQuery made a similar observation, and many of its functions are list
comprehensions. If they had chosen a different default behavior for empty
lists I might still be championing it today. But the silent failures were a
bitter pill to swallow.

In fact I sometimes still fantasize about building a mini front end framework
containing the inverse behavior and some other more modern ideas.

~~~
k__
What's the inverse behavior?

~~~
hinkley
Ah, yes.

I meant fail on an operation on an empty set. The number of times that I’ve
had a potentially empty set is a fraction of all situations. Often a select
all/select none situation, and adding a flag for silent failure in that case
is cheap compared to all of the other bugs I’ve had to fix.

------
lsb
You see some of this with NumPy, where +, for instance, has been overridden
to mean addition of n-dimensional arrays, and NumPy is foundational for a lot
of the data science programming these days.
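The overloading NumPy does can be mimicked in a few lines of plain Python. This toy class (illustrative only, not NumPy itself) shows how `+` comes to mean elementwise addition, with scalars broadcasting across the vector:

```python
class Vec:
    """Toy 1-D array with NumPy-style elementwise arithmetic."""
    def __init__(self, items):
        self.items = list(items)

    def __add__(self, other):
        if isinstance(other, Vec):
            # Vector + vector: pairwise addition.
            return Vec(a + b for a, b in zip(self.items, other.items))
        # Vector + scalar: the scalar broadcasts, as in NumPy.
        return Vec(a + other for a in self.items)

x = Vec([1, 2, 3])
y = Vec([10, 20, 30])
print((x + y).items)    # [11, 22, 33]
print((x + 100).items)  # [101, 102, 103]
```

The "loop" is still there inside `__add__`; the notation just stops mentioning it.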

------
ncmncm
Tl;dr : APL has implicit loops because almost everything is an array, and
mentioning an array implies a loop to process it.

In C++, nowadays, people are encouraged to use Standard Library algorithms in
preference to most loops; and to make an algorithm out of any loop that can't
be cleanly replaced with a Standard one, and call that. The reasoning is
similar to that explained in the article, except that saying which you are
doing, by naming the algorithm, communicates better what you are trying to
achieve than does coding the loop in place.

In APL, of course, naming anything with more than two or three letters, or
wrapping a one- or two-operator expression in a function, gets you looked at
funny, so that reason wouldn't apply.

~~~
7thaccount
From what I've read, your last paragraph is true, but only because the naming
often isn't necessary. The importance of abstraction in a codebase that is 60
pages of code is high, but it fades away if the equivalent J program can be 5
pages of expressions. Instead of defining some numerical operation that is
only 5 characters long, just use the 5 characters in the 2 to 3 places where
it is needed in your 5 pages of code. This way everything is explicit and you
don't have to reference back to the definition. This would be an awful idea in
Java, but it works great in J and APL. Aaron Hsu talks about this with his
parallel APL compiler and refers to this as working at the macro level. Note
that he has plenty of named items that are passed around in a data flow
fashion, but he doesn't need to name every little function.

~~~
ncmncm
The naming is never necessary, in any language. Even where a function is
called, the name may be uninformative.

A name is an opportunity to provide useful information about intent to the
reader. Open-coding, however concise it may be, fails to communicate intent.

------
m0zg
Loops of known size are one of the best hints you can give to your compiler
though. If the computation inside the loop isn't too complicated, the compiler
will unroll these, often resulting in amazing speedups. In fact if you know
that your array is pretty long but don't know its size, it's often worthwhile
to structure the computation as a nested loop, where the inner loop processes
chunks of known length (which should be fairly short, and picked to utilize
the widest SIMD on your target CPU, e.g. 8 for fp32 on most Intel chips). Then
when you run out of chunks you "mop up" at the end. Furthermore, if your chunk
size is a power of 2 (which it should be), don't be afraid of the modulo
operator (which is normally very slow): the compiler will automatically apply
the x % chunk_size == x & (chunk_size - 1) trick for you. All of the above
obviously only works if chunk size is known at compile time.
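The chunk-plus-mop-up structure is language-independent; here is a Python sketch of the shape only (the actual speedup comes from compiler unrolling and SIMD, which an interpreter will not deliver):

```python
def chunked_sum(xs, chunk_size=8):
    # chunk_size must be a power of two so % can become a bitwise AND.
    assert chunk_size & (chunk_size - 1) == 0
    n = len(xs)
    tail_start = n - (n & (chunk_size - 1))  # same as n - n % chunk_size
    total = 0
    # Inner loop of known, fixed length: the part a compiler can unroll.
    for i in range(0, tail_start, chunk_size):
        for j in range(i, i + chunk_size):
            total += xs[j]
    # Mop up the remainder that didn't fill a whole chunk.
    for j in range(tail_start, n):
        total += xs[j]
    return total

print(chunked_sum(list(range(20))))  # 190
```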

~~~
roelschroeven
Good modern compilers do all of that for you, even for unknown sizes. For
example, see how clang unrolls (and vectorizes) a simple loop:
[https://godbolt.org/z/NPfnhb](https://godbolt.org/z/NPfnhb)

~~~
m0zg
Yes, in trivial cases like this one, the compiler will generate assembly
that's hard to improve upon. But if there were more meat on the bones there
(as there is, most of the time, if you aren't just summing ints), this
technique could help. As always, benchmark before and after and keep the best
variant.

------
jancsika
> In J we do this by _creating_ a Boolean list, one value (0 or 1) per item of
> z

Does the word "creating" here mean you can't do a relatively simple check
across array elements in J _without_ allocating memory? E.g., soft realtime
scheduling for such a simple operation is essentially just not possible in J?

~~~
jonahx
I don't recall offhand the answer to this specific question, but many common
operations in J are highly optimized under the hood. So even though you are
doing "conceptually expensive" things, J will often implement them
efficiently.

J is quite fast, shockingly so for an interpreted language.

------
kbbr
Unless you can leverage special instructions / hardware to realize your array
operations, loops will be the fundamental operations generated by the compiler
and executed by the CPU. For a lot of scenarios, loopless code is surely more
expressive. However, I would not say that it is per se superior in any way.
Rather, how far to abstract from the actual CPU instructions is a feature of
each language.

So classic loops in C remain relevant when you need fine-grained control over
the generated code, while for other use cases a simple array operation can be
expressed in a much higher-level way.

------
manmal
Swift provides many of the required Collection operators to go loopless - map,
reduce, flatMap, compactMap (eliminate nil values), filter, first, contains,
prefix/suffix, ... there’s even an OSS library that gives a compile time
guarantee that a Collection is non-empty.

There are only some instances where I ever need a loop:

\- map cannot produce a Dictionary, so Dictionary manipulations usually
require a loop

\- Sometimes, a more complex qualifier or stopping condition is needed. Like
striding through a Collection in a non-linear way.

~~~
dorian-graph
> \- map cannot produce a Dictionary, so Dictionary manipulations usually
> require a loop

Assuming you mean mapping over a list, some sort of fold function could
produce a dictionary in some languages, such as Elm. [1]

[1] [https://package.elm-lang.org/packages/elm/core/latest/List#foldl](https://package.elm-lang.org/packages/elm/core/latest/List#foldl)
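The same fold-into-a-dictionary move works in most languages with a reduce/fold; a Python sketch (the grouping-by-first-letter example is just illustrative):

```python
from functools import reduce

words = ["apple", "avocado", "banana", "cherry"]

# Fold the list into a dict keyed by first letter: no loop statement,
# each step returns a new accumulator dict.
by_letter = reduce(
    lambda acc, w: {**acc, w[0]: acc.get(w[0], []) + [w]},
    words,
    {},
)
print(by_letter)  # {'a': ['apple', 'avocado'], 'b': ['banana'], 'c': ['cherry']}
```

(Python 3.8+ also offers a mutating `dict.setdefault` loop for this, which is usually faster; the fold form is the one that generalizes to languages like Elm.)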

------
flexblue
Quote:

"(x + y) is an expression rather than a statement. The J programmer can embed
(x + y) in a larger expression, perhaps a matrix multiplication (w +/ . * (x +
y)) which adds the equivalent of three more nested loops, but is still a
single expression. Expressions can be combined; statements cannot."

What about an expression like (x * x + y * y)? This would still be a single
loop in C. Is J smart enough to figure that out, or will it turn that into
three loops?

------
chris_wot
That table of alternatives to loops is really interesting. I wonder if there
is a similar table for C++ in some form on another website?

~~~
ncmncm
cppreference.com

The Standard Library algorithms explicitly implement such a taxonomy.

------
agumonkey
I find that most programming languages are often wasting time on the notion of
plurals .. loopless sounds appealing there.

------
ridiculous_fish
First I've heard of J! Say you want to compute the product of integers in a
list. Should you encounter a zero, you can exit immediately since the product
is known. In C this would be an `if (arr[i] == 0) break`. How would one
express this in J's loopless style?
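For comparison, here is the C-style early exit in Python, plus one hedged stab at a more "loopless" phrasing (illustrative Python, not J): membership testing does stop at the first zero, although when no zero is present the whole-array product then rescans the data.

```python
from math import prod

def product_early_exit(nums):
    # Explicit early exit, C-style: break out as soon as the answer is known.
    acc = 1
    for n in nums:
        if n == 0:
            return 0
        acc *= n
    return acc

xs = [3, 7, 0, 5, 9]

# "Loopless" phrasing: a membership test plus a whole-array product.
result = 0 if 0 in xs else prod(xs)

print(product_early_exit(xs))  # 0
print(result)                  # 0
```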

------
kerzol
There is a CS concept with a similar name, but completely different meaning:

[https://en.m.wikipedia.org/wiki/Loopless_algorithm](https://en.m.wikipedia.org/wiki/Loopless_algorithm)

------
fake-name
s/loopless/explicit loop/

------
theamk
If you really want to play with loopless programming, I recommend Python’s
numpy. It implements many of the same constructs, but using regular English
words instead of punctuation.
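NumPy aside, even the standard library lets you play with the article's Boolean-mask idiom: `itertools.compress` is a close cousin of J's copy verb `#` (build a mask, then use it to select).

```python
from itertools import compress

z = [3, -1, 4, -1, 5, -9]

# Build a Boolean per item of z, J-style (like z > 0 in NumPy notation)...
mask = [x > 0 for x in z]  # [True, False, True, False, True, False]

# ...then select with the mask: roughly (mask # z) in J.
positives = list(compress(z, mask))
print(positives)  # [3, 4, 5]
```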

~~~
kccqzy
Well, numpy doesn't totally obviate the need for loops, though it is a good
mental exercise to try to write as few loops as possible.

------
joncp
I've gotten to where I see loops as a code smell. That includes thinly-veiled
loops like ruby's Enumerable#each.

~~~
tom_mellior
And instead you use...?

~~~
krferriter
Yeah, I'd like to know also. Loops are a foundational operation. Instead of a
loop you could use a function/operator overload, which would just call another
function which contains the loop. But that isn't doing away with loops; it
just encapsulates them inside a function and calls them there, which avoids
polluting the higher-level code context with loop code and makes that upper
context easier to read.
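That encapsulation can be sketched in a few lines of Python: built-ins like `sum` and generator expressions hide the loop behind a name, but it still runs.

```python
data = [1, 2, 3, 4, 5]

# Explicit loop in the calling code:
total = 0
for x in data:
    total += x * x

# Same loop, hidden behind higher-level names:
total2 = sum(x * x for x in data)

print(total, total2)  # 55 55
```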

~~~
wffurr
You described the GPs point perfectly. This is also covered in the article; it
exhaustively covers the use cases for loops and provides higher level
functions and operators instead.

~~~
tom_mellior
It covers the use cases for loops over arrays/lists, not over more general
graph-like structures.

~~~
wffurr
For which there exist a slew of higher order graph traversal algorithms.

