
> A concatenative programming language is a point-free programming language in which all expressions denote functions and the juxtaposition of expressions denotes function composition. The combination of a compositional semantics with a syntax that mirrors such a semantics makes concatenative languages highly amenable to algebraic manipulation.

This is the reason I hate so many (not all) Wikipedia articles - the introduction is an exercise in code golf, in human language.

This is devoid of the key quality most readers want from an encyclopedia: understanding from an ignorant starting point. An intro should not be a puzzle that tries to fit a comprehensive technical description into a single, useless sentence.




Lemme try in something closer to plain English:

In most programming languages, the way you compose functions is by creating some other function that takes whatever arguments are necessary, applies the first function to some subset of them, collects its result, and then applies the second function to some other subset of the arguments, plus the result of the first function.

IOW, while the syntax can be quite clean:

  let h(x) = f(g(x))
is doing quite a lot in the background. More so if some of the top-level arguments are to be passed to f and not g:

  let h(x, y) = f(g(x), y)
In a concatenative language, all you have to do to compose functions is place them one after the other. So those two compositions both become something more like

  let h = g f

One big difference here, and the main part of how most concatenative languages achieve this behavior, is that arguments are never stated or passed explicitly. Instead, inputs and outputs are typically retrieved from and placed onto some sort of stack.
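
A rough sketch of that idea in Python (the words here are made up for illustration; real concatenative languages don't run on Python lists, but the shape is the same):

  # Each "word" is a function from stack to stack: it pops its
  # inputs and pushes its outputs, so arguments are never named.
  def g(stack):              # hypothetical word: add 1 to the top
    stack.append(stack.pop() + 1)
    return stack

  def f(stack):              # hypothetical word: double the top
    stack.append(stack.pop() * 2)
    return stack

  def run(program, stack):   # running a program = applying each
    for word in program:     # word to the stack, in order
      stack = word(stack)
    return stack

  h = [g, f]                 # "let h = g f": just g, then f
  print(run(h, [3]))         # [8], i.e. f(g(3)) = (3 + 1) * 2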

(edit: Just realized I never really said why they're called "concatenative". It's because, at least in a Forth-style concatenative language where the compiler isn't doing any fancy footwork, you can compose functions by literally just concatenating them. So that `let h = g f` is basically saying, "To execute H, first execute G and then immediately execute F.")


Most languages that aim to be minimal bake in a minimal syntax, but without implicit parameter passing, that kind of composability doesn't extend very far.


In a Forth-like language, you can think of all functions as implicitly being unary: they take a stack and return a stack.

I've never used any other kind of concatenative language, so I'm less able to speak to them, but I'm guessing something similar applies to those as well.
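
Concretely, here's that unary framing as a Python sketch (made-up words again): every function has the same stack-to-stack shape, so concatenating them is just ordinary function composition.

  # Every word has the single "type" stack -> stack.
  def dup(s): return s + [s[-1]]               # [... a] -> [... a a]
  def mul(s): return s[:-2] + [s[-2] * s[-1]]  # [... a b] -> [... a*b]

  def compose(f, g):
    return lambda s: g(f(s))                   # run f first, then g

  square = compose(dup, mul)                   # "dup *"
  print(square([5]))                           # [25]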


How about:

"Concatenative programming style emphasizes function composition over function application. Complex operations are composed from many simpler operations, and the actual program input or state does not need to be referred to explicitly. Often, these composed operations resemble a data flow or pipeline, where the implicit input is passed along to each operation in sequence."

(Disclaimer: not an expert on this style, just offering my own description that is hopefully correct and easier to understand.)
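
For what it's worth, that pipeline reading is easy to sketch in Python (the operation names are chosen just for illustration):

  from functools import reduce

  # Compose operations left to right; the input flows through the
  # pipeline without ever being named in between.
  def pipeline(*fns):
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

  normalize = pipeline(str.strip, str.lower, str.split)
  print(normalize("  Concatenative LANGUAGES  "))
  # ['concatenative', 'languages']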


Functional programming also emphasizes function composition. Especially anything that takes its intellectual heritage from John Backus. (viz., not Lisp.) His seminal paper on the subject emphasized that, in well-designed functional languages, functions take only a single argument, because that's necessary to support a good composition-oriented programming style.

I think the root of the distinction is more about how you achieve that composition: Does it smell more like Alonzo Church or Alan Turing?


The actual Wikipedia page has a bunch of links that the OP doesn't include. To me, this is the opposite of code golf; rather than trying to concisely express the concept in the language's "builtins", it instead delegates the underlying concepts to their own "functions", and only describes the relationship between these concepts.

https://en.wikipedia.org/wiki/Concatenative_programming_lang...

If you know what the linked terms mean, you can move on quickly. If you don't, then explaining the higher-level concept in a way that skips these building blocks would be a waste of time; it's a better use of everyone's time for readers to at least skim the linked articles, first.


Wikipedia tends to be technical about math/CS subjects. In other fields, you can use it as a textbook for a 101 course - I once heard of an astronomy professor who did this, and this is what I recommend people do with linguistics. (Except syntax, which, as the most CS-adjacent subfield, gets the most technical treatment.)

The page for Joy, Manfred von Thun's concatenative language, is a little better:

> Joy is unusual ... in its ... lack of formal parameters.

So, where in a standard ALGOL- or C-type language, you'd define a function that squares a number like this:

  def square(x):
    return x * x
And in Haskell... OK, I don't know Haskell, but if I click through tryhaskell.org a bit, I see this:

  let square x = x * x in square 2
But in Joy, the way to write a function that squares a number doesn't involve explicit references to parameters at all:

  dup *
Or, if you want to name your function:

  DEFINE square == dup *.
Every function takes a stack and returns a stack. `dup` takes a stack [A B C...] and returns [A A B C...], for any values of A, B, and C. `*` takes a stack [A:int B:int C...] and returns [A*B C...].
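
So `2 square` expands to `2 dup *`, and the stack steps through it like this:

  2      push the literal 2      stack: [2]
  dup    duplicate the top       stack: [2 2]
  *      multiply the top two    stack: [4]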

Now, Joy in particular was written as a research language about recursive combinators. So a mostly-idiomatic* factorial function in Joy looks like this:

  DEFINE fac == 
    [0 =]
    [1 +]
    [dup 1 -]
    [*]
    linrec.
And then you call that with `5 fac` or something.

The neat thing about this is that you don't need to define functions and reference them from inside themselves in order to get recursion, and you can write code that's a little more legible than using the Y combinator for everything. You could also write something like this:

  DEFINE fac-explicit-recursion ==
    [0 =]
    [pop 1]
    [dup 1 - fac-explicit-recursion *]
    ifte. 
But you don't have to.

`linrec` is a function that expects four quoted functions as the top elements on the stack. It executes the first function; if that 'returns' true, it executes the second one and stops, and if not, it executes the third function, recurses, and then executes the fourth.
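
If it helps, here's that control flow approximated in Python, with the stack as a list (top at the end). This is my own sketch of `linrec`, not how Joy actually implements it; in particular, I let the condition peek at the stack instead of consuming it, which stands in for linrec saving and restoring the tested value.

  def linrec(stack, cond, base, split, combine):
    if cond(stack):      # test without consuming
      return base(stack)
    return combine(linrec(split(stack), cond, base, split, combine))

  # factorial, mirroring [0 =] [1 +] [dup 1 -] [*] linrec
  fac = lambda s: linrec(
    s,
    lambda s: s[-1] == 0,                # [0 =]
    lambda s: s[:-1] + [s[-1] + 1],      # [1 +]
    lambda s: s + [s[-1] - 1],           # [dup 1 -]
    lambda s: s[:-2] + [s[-2] * s[-1]],  # [*]
  )
  print(fac([5]))                        # [120]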

A while ago, I started working on a Joy interpreter in the browser. I haven't gotten around to finishing it yet, but it's at least working well enough that you can (hopefully) step through the examples above: https://defseg.io/joyjs/

For a flashier example (which won't work there right now, because I haven't implemented... `binrec`... yet), here's quicksort:

  [size 1 <=]
  []
  [uncons [>] split]
  [[swap] dip cons concat] 
  binrec
`binrec` is another recursive combinator that expects four quoted functions. It handles the first two functions there the same way as `linrec` - executes the first (here, "does this list have one or fewer elements?"), and if true, executes the second (here, do nothing).

But if the condition is false, it executes the third function to produce two values, recurses on both, and then uses the fourth function to combine them. So you get your pivot, split your list, recurse on both lists, and then stitch the two result lists and the pivot back together in sorted order.
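
And `binrec` in the same Python sketch style (again my own approximation; in particular, which half of `split`'s output lands where is my reading of the Joy version, so details may not match Joy exactly):

  def binrec(stack, cond, base, split, combine):
    if cond(stack):
      return base(stack)
    stack = split(stack)                 # leaves two values on top
    rest, x, y = stack[:-2], stack[-2], stack[-1]
    x2 = binrec(rest + [x], cond, base, split, combine)[-1]
    y2 = binrec(rest + [y], cond, base, split, combine)[-1]
    return combine(rest + [x2, y2])      # push both results, combine

  # quicksort: the pivot stays on the stack under the two sublists,
  # and combine stitches sorted-less ++ [pivot] ++ sorted-rest
  qsort = lambda s: binrec(
    s,
    lambda s: len(s[-1]) <= 1,                         # [size 1 <=]
    lambda s: s,                                       # []
    lambda s: s[:-1] + [s[-1][0],                      # uncons [>] split
               [x for x in s[-1][1:] if x < s[-1][0]],
               [x for x in s[-1][1:] if x >= s[-1][0]]],
    lambda s: s[:-3] + [s[-2] + [s[-3]] + s[-1]],      # stitch together
  )
  print(qsort([[3, 1, 4, 1, 5]]))                      # [[1, 1, 3, 4, 5]]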

* 'Mostly' because you'd actually write `[null] [succ] [dup pred] [*] linrec`, but the two functions are identical. (`null` is overloaded to also test for zero elements in an aggregate, but if you pass it an aggregate it'll throw a type error anyway.) Similarly for quicksort: `[small] [] [uncons [>] split] [enconcat] binrec`.



