
Playing with Syntax - jsnell
http://stevelosh.com/blog/2016/08/playing-with-syntax/
======
reikonomusha
I agree with the overall message of the article (Lisp is a good substrate for
exploring syntactic ideas) but find that it wanders somewhat aimlessly toward
constructing some macros of dubious value.

Don't Repeat Yourself is a common tenet of programming. Lisp gives you the
opportunity to virtually never repeat yourself because making syntactic
abstractions is cheap and easy. It must be recognized however that
abstractions range from totally useless to paradigm-shifting, and DRY isn't
about saving typing but rather encapsulating ideas into reusable constructs. A
common problem I see in production Lisp code bases is the existence of a
programmer's set of pet syntactic abstractions that don't really have a high
ceiling; you see all kinds of zaps, frobs, and lets for no reason other than
that they save some typing. I do not appreciate such abstractions. The Lisp
syntactic abstractions I do appreciate are ones that bring me into a new
paradigm of thinking. For instance, Lisp includes a set of macros to express
complex iteration readably. Without this, iteration wouldn't be much more than
writing (un)conditional jumps, and by having the abstractions, you shift your
thinking from the notion of jumps to the notion of traversal. Another recent
example is a set of macros to simplify index wrangling in tensor algebra.
These "macros" already existed in contemporary math in the form of constructs
such as Einstein notation and they make reasoning and thinking about tensors
easier.
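
A minimal illustration of that shift (the function names here are mine, not
from the article): the same traversal written as explicit jumps with
`tagbody`/`go`, and then as intent with `loop`:

    
    
      ;; Traversal as jumps: TAGBODY and GO
      (defun sum-list-jumps (list)
        (let ((total 0))
          (tagbody
           top
             (when (null list) (go done))
             (incf total (car list))
             (setf list (cdr list))
             (go top)
           done)
          total))
    
      ;; The same traversal with LOOP: you think "sum the elements",
      ;; not "jump back to the top until the list is empty".
      (defun sum-list-loop (list)
        (loop for x in list sum x))
    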

This article, in my opinion, treks to the destination of a macro that, while
perhaps neat, isn't going to give you higher quality programs.

~~~
defgeneric
> A common problem I see in production Lisp code bases is the existence of a
> programmer's set of pet syntactic abstractions that don't really have a high
> ceiling; you see all kinds of zaps, frobs, and lets for a reason no other
> than they save some typing. I do not appreciate such abstractions.

I found this style irritating at first, but after programming in CL for a
while you begin to appreciate the reason for it.

The CL specification is basically frozen, which means no new language
features. That has led to a collection of "canonical" libraries like
alexandria, which brings in a lot of the batteries-included stuff you'd
expect. However, if I need a version of `map` or `zap` or something, I'd
sometimes rather just write the macro myself and save the external dependency.
So a lot of these little single-purpose macros you find all the time are a
symptom of the way CL is frozen. Notice how Clojure doesn't have this problem
in the way CL does.

Another reason for the prevalence of pet syntactic abstractions is that this
is actually to a certain degree the way you're supposed to program in CL. The
idea is that you start with a runtime "lisp image", which is typically the
`CL-USER` package, and your program makes modifications to the lisp image.
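
For instance, rather than depending on alexandria just for `with-gensyms`, it
is only a few lines to roll your own (a simplified sketch of the common idiom,
not alexandria's exact definition):

    
    
      (defmacro with-gensyms ((&rest names) &body body)
        "Bind each symbol in NAMES to a fresh gensym for use in macro bodies."
        `(let ,(mapcar (lambda (n) `(,n (gensym ,(string n)))) names)
           ,@body))
    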

~~~
platz
Contrast this with the dictum in Haskell land that typeclasses (abstractions)
should have laws to justify their existence: just any random abstraction will
not do. Although in the small, higher-order functions seem to fill in most of
the gaps. "Syntactic abstractions I do appreciate are ones that bring me into
a new paradigm of thinking" indeed.

------
userbinator
_It turns out that SBCL is really quite good at optimizing let statements and
local variables._

That is not surprising for any dataflow-based compiler, since "a = b" by
itself won't generate even one extra instruction unless those two variables'
values get used independently, creating a "fork" in the path and necessitating
a copy.

However --- and this is a good example of how compilers, especially for very
high-level languages, are still far from optimum even for trivial optimisation
--- the majority of those instructions are still unnecessary. Ultimately, the
purpose of that code is to perform one addition and one modulus. Doing that
should not take _over two dozen instructions_ on any sane architecture.

At a glance I can already see that line 9 is superfluous: it writes the
quotient into RBX, where it will never be used again. And there should not be
any reason to check for 0 explicitly if you are going to just raise the
exception anyway --- the CPU will do that automatically.

If I analyse that snippet a little more I could probably find more ways to
optimise it, but it would be easier and faster to just show what a human, and
presumably a better compiler, could do:

    
    
        add rax, rcx
        cqo
        idiv rdx
    

Assuming we are using registers for arguments and have a choice of where the
caller expects the return value, those 3 instructions are all that's
necessary, and the cqo is only because x86's divide takes a double-width
dividend.

 _This is why I love Common Lisp. You can be rewriting the syntax of the
language and working on a tower of abstraction one moment, and looking at X86
assembly to see what the hell the computer is actually doing the next. It’s
wonderful._

I want to believe that compiler optimisations are actually good enough to
"collapse the abstractions" and make these languages generate nearly the same
code as an expert Asm programmer, but when a simple function, even at maximum
optimisation, turns into 23 instructions in 70 bytes vs. the 3 instructions in
8 bytes that I'd expect, it's rather disappointing.

~~~
junke
Well, if you want a correct result with negative numbers, you have to
implement the modulo operation, not remainder. The distance in the example can
be negative, so can be the sum, but you want the resulting position to always
be positive.
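
Concretely, CL's `mod` always takes the sign of the divisor, while `rem`
(which is what x86's IDIV computes) takes the sign of the dividend:

    
    
      (mod -3 10)  ; =>  7   sign of the divisor: a valid screen position
      (rem -3 10)  ; => -3   sign of the dividend: off the screen
    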

There are other instructions that probably deal with returning a proper
fixnum. If I remember correctly, the CLC before the RET is to signal that
there is a single return value, not multiple ones. I am not saying that there
aren't possible optimisations, but (i) some things are tied to the existing
computation models and (ii) SBCL developers have limited resources.

See also SB-ASSEM to emit machine code
([https://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembly-code-breadboard/](https://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembly-code-breadboard/)).

------
expression
I guess I'm never going to actually “get Lisp” enough to appreciate its syntax.

>Aside from the prefix ordering, Common Lisp’s syntax is already a bit more
elegant because you can set arbitrarily many variables without repeating the
assignment operator over and over again:

    
    
        ; Have to type out `=` three times
        x = 10
        y = 20
        z = 30
    
        ; But only one `setf` required
        (setf x 10
              y 20
              z 30)
    

I utterly fail to see the aforementioned elegance, although I certainly can't
miss the line where it happens.

~~~
JohnStrange
Having programmed many substantial projects in both worlds, I have to admit
that I much prefer S-expressions over any other syntax -- without complicated
macros that introduce some sort of keywords or other tricks. With
S-expressions, there is basically just one syntax to learn for every
construct, so you can focus on the semantics. (I do prefer Scheme's way of
dealing with functions, but that's another matter, of course.)

Unfortunately, I otherwise prefer strongly typed systems languages with a
strong focus on compile-time, zero cost solutions such as Ada or Rust. My
ideal language would be a very fast, statically and strictly typed language
with a modern incremental garbage collector that can be switched off and
_without_ type inference, but with an S-expression syntax.

However, if such a language existed, or if I developed it on my own, it
probably wouldn't gain much popularity... ;-)

~~~
qwertyuiop924
Shen is probably the closest you'll get.
[http://shenlanguage.org](http://shenlanguage.org)

It's strongly typed, but not a systems language, and very much not zero-cost.

There are some systems lisps with no GC. I just wish I could find them...

~~~
chriswarbo
A notable Lisp without GC is Linear Lisp
[http://home.pipeline.com/~hbaker1/LinearLisp.html](http://home.pipeline.com/~hbaker1/LinearLisp.html)

~~~
qwertyuiop924
...Hang on a second, that sounds a heck of a lot like NewLisp's memory
management model, with the disadvantages thereof.

------
abritinthebay
Every Lisp article promoting the "elegance" of its syntax will do the exact
opposite to non-users of Lisp.

This is no exception.

From the first example on there is not one case where the Lisp syntax is a
clearer expression of the concept and it's hard to justify that anything that
is less clear is elegant in the slightest.

Personally I'd put destructing assignment as more elegant for the first
example:

    
    
        [a, b, c] = [10, 20, 30]
    

But that's just me.

~~~
kazinator
That assignment has 9 pieces of extra syntax above a b c 10 20 30. By
contrast, (setf a 10 b 20 c 30) has only three: two parens and setf. Let's
count the crumbs:

    
    
      [a, b, c] = [10, 20, 30]
      1 2  3  4 5 6  7   8   9
    

In Common Lisp, you can express that 10 20 30 should go to a b c using values:

    
    
      (setf (values a b c) (values 10 20 30))
    

That _still_ has no more overhead tokens:

    
    
      (setf (values a b c) (values 10 20 30))
      12    34           5 67              89
    

If they bother you, make yourself

    
    
      (vsetf (a b c) (10 20 30))
    

This is easy to implement in a quick and dirty way:

    
    
      (defmacro vsetf (places exprs) `(psetf ,@(mapcan #'list places exprs)))
    
      (macroexpand-1 '(vsetf (a b c) (1 2 3)))
      -> (PSETF A 1 B 2 C 3)
    

psetf is probably a good choice here, since it is best for the construct to
have parallel semantics. That is to say, we want, for instance, to be able to
exchange variables with it:

    
    
      (vsetf (x y) (y x))
    

which will work thanks to psetf.

We could huff and puff a little harder in the macro and make it work like

    
    
      (vsetf x y = y x)
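
One quick and dirty sketch of that variant: split the argument list at the
literal `=` symbol and hand the two halves to psetf as before:

    
    
      (defmacro vsetf (&rest args)
        ;; Split the arguments at the literal = symbol.
        (let* ((tail (member '= args))
               (places (ldiff args tail))
               (exprs (rest tail)))
          `(psetf ,@(mapcan #'list places exprs))))
    
      (macroexpand-1 '(vsetf x y = y x))
      -> (PSETF X Y Y X)
    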

~~~
abritinthebay
That's a lot of work to demonstrate that Lisp's syntax is less elegant than
freaking ES6.

> That assignment has 9 pieces of extra syntax

Seeing as we're not playing Code Golf, that doesn't really matter, does it?
What it _is_ is clearer, easier to understand, and actually built into the
language.

Lisp is many things, syntactically elegant isn't one of them.

~~~
kazinator
I feel that I hadn't seen true beauty in a text editor window until I got
involved with Lisp, and that EczemaScript is designed by imbeciles.

I'm willing to try to put some kind of numbers on my point of view. Basically,
it boils down to token counts and the presence of implicit associativity and
precedence rules.

We can remove the four square brackets from the [ a, b, c ] = [ 1, 2, 3 ]
assignment, thereby significantly reducing the verbiage. However, we end up
with ambiguity. There is no universal precedence between comma and assignment.
For instance, in C (a language which indirectly inspired the syntax of ES), it
is the other way: the expression a, b, c = 1, 2, 3 in C will perform only the
c = 1 assignment. It's good that ES spends these extra tokens to reduce
ambiguity here.

Every extra token should count as one demerit point. Then, every instance in a
sentence where an implicit rule must be invoked to disambiguate the parse
should result in multiple demerits determined according to how the smallest
number of grouping tokens would have to be inserted to remove the ambiguity.
(That's being generous, because it doesn't take into account the _work_ of
determining the parse and doing the insertion.)

So for instance "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo
buffalo" has a minimal number of tokens to express its meaning (which is
good), but there is a significant cognitive load to figure out its structure
from all the implicit grammar rules (bad). If we insert the parentheses we get
"((Buffalo buffalo) (Buffalo buffalo) buffalo) buffalo (Buffalo buffalo)".

Some functional languages have this "buffalo disease"; strings of words are
just catenated, and working out all the partial applications that are buried
in there and whatnot is a hairball.

~~~
abritinthebay
> Basically, it boils down to token counts and the presence of implicit
> associativity and precedence rules.

That's... a really arbitrary and contrived presumption. Not to mention
presuppositional (and circular) in nature. "I define what makes a language
beautiful as what Lisp is best at, therefore lisp is most beautiful."

It's things like this which cause Lisp evangelists to not get taken seriously.

Based on your comparison, anyhow, Python is much better with its tuples (i.e.
a, b, c = 1, 2, 3) and Lisp would be fundamentally worse.

~~~
kazinator
I see; so when you say things like _'Lisp is many things, syntactically
elegant isn't one of them'_ you are taken seriously, whereas if I say the
opposite I am not; and that's even though I have tried to reflect on my
biases and quantify them in some sort of objectively evaluable terms.

~~~
abritinthebay
No, it's _when you define "beautiful" as "equivalent to lisp syntax"_ that you
are not taken seriously.

Which you did. So you weren't.

------
tomjakubowski
A great thing about Lisp/s-expressions, overlooked in this article, is that
your code is divided hierarchically into little chunks that can be slurped,
barfed, cut, paste and rearranged trivially with structured sexpr editors like
Paredit or Smartparens.

"Take just this subexpression of a larger expression, move it into the
parameter list of the function call five lines down" is annoying using almost
any text editor with languages like C or Python. It takes about a half-dozen
composable keystrokes for sexprs in Emacs with Paredit.

~~~
majewsky
This sounds easy with the "%" movement in vim. Put the cursor before the
opening paren of the subexpression you want to move, "d%", then go to the
target and "p".

------
kazinator

      (defun move-ball (ball distance screen-width)
        (zapf (ball-x ball)
              (mod (+ distance %) screen-width)))
    

In TXR Lisp:

    
    
        (defun move-ball (ball distance screen-width)
          (placelet ((it (ball-x ball)))
             (set it (mod (+ distance it) screen-width))))
    

Trivial exercise: zapf macro expanding to the placelet form.
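
For the portable-CL flavor of that exercise, one sketch uses
GET-SETF-EXPANSION so that, as with placelet, the subforms of the place are
evaluated only once (using % for the captured value, as in the article):

    
    
      (defmacro zapf (place expr)
        ;; Evaluate the subforms of PLACE once, bind % to its current
        ;; value, then store the result of EXPR back into the place.
        (multiple-value-bind (temps vals stores setter getter)
            (get-setf-expansion place)
          `(let* ,(mapcar #'list temps vals)
             (let ((% ,getter))
               (declare (ignorable %))
               (multiple-value-bind ,stores ,expr
                 ,setter)))))
    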

TXR Lisp's anaphoric operators like ifa and conda use placelet, so that their
"it" can be a place referring to the original:

    
    
       ;; decrement (ball-x ball) place if it exceeds 15:
    
       (ifa (> (ball-x ball) 15)
         (dec it))

~~~
kazinator
By the way, placelet isn't macrolet. In the above, the (ball-x ball) place
form is evaluated once to determine the place which becomes aliased by the it
symbol. The two occurrences of it in the (set it ...) form do not cause
multiple evaluation of (ball-x ball).
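
The difference is easy to demonstrate in portable CL with symbol-macrolet,
which _does_ re-expand the place form at every use (the fetch-ball function
here is hypothetical, added only to count the evaluations):

    
    
      (defstruct ball x)
      (defvar *ball* (make-ball :x 5))
      (defvar *calls* 0)
    
      (defun fetch-ball ()
        (incf *calls*)
        *ball*)
    
      ;; (setf it (1+ it)) expands the place form twice:
      ;; once for the read and once for the write.
      (symbol-macrolet ((it (ball-x (fetch-ball))))
        (setf it (1+ it)))
    
      *calls* ; => 2, where a placelet-style alias would give 1
    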

------
jiyinyiyong
I would still suggest editing the AST directly:
[https://www.youtube.com/watch?v=g0tAVjwuc1U](https://www.youtube.com/watch?v=g0tAVjwuc1U)
Text syntaxes bring a lot of complexity.

~~~
_mhr_
You should submit that project as a Show HN, that is seriously cool.

~~~
jiyinyiyong
Maybe this link: [http://cirru.org/](http://cirru.org/) . What do you think?

I wrote about it, actually; Cirru Editor's demo is hard to try out:
[https://medium.com/cirru-project/stack-editor-programming-by-functions-a961f1d9555c](https://medium.com/cirru-project/stack-editor-programming-by-functions-a961f1d9555c)

------
LeanderK
I like Lisp, but I really hate these posts. In Lisp you can do some powerful
stuff, but these posts (which come up regularly) try to promote the language
by being overly clever. Don't be overly clever when programming.

Also this is stupid:

    
    
      ; Have to type out `=` three times
      x = 10
      y = 20
      z = 30
      
      ; But only one `setf` required
      (setf x 10
            y 20
            z 30)

------
akkartik
It seems worth pointing out that credit for the name _zap_ should go to Arc.
(Unless it already existed somewhere else even before that?)

[https://github.com/arclanguage/anarki/blob/15481f843d/arc.arc#L1311](https://github.com/arclanguage/anarki/blob/15481f843d/arc.arc#L1311)

------
rch
I'd like to see a service that lets people play with grammar specs
interactively, with working snippets.

------
qwertyuiop924
And all I can think is that all of this would be more elegant in Scheme. In
particular, Scheme's cut macro makes it practical to re-order args, as opposed
to implementing the % expansion used here. And it composes.

Or you could always just use readtables to implement your own version of the
Clojure lambda shorthand. I'm not clear on why Steve didn't do that. It's a
more effective alternative to implementing the % syntax in every macro.
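
A minimal sketch of the readtable route (the #f dispatch character and the
single-argument % convention are my choices, not something from the article):

    
    
      ;; Make #f(+ % 1) read as (lambda (%) (+ % 1)).
      (set-dispatch-macro-character #\# #\f
        (lambda (stream subchar arg)
          (declare (ignore subchar arg))
          `(lambda (%) ,(read stream t nil t))))
    
      (funcall #f(* % 2) 21)  ; => 42
    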

------
ilovecomputers
Articles like these remind me that Lisp uses the perfect number of tokens.

------
divs1210
_swap!_ in Clojure is analogous to _callf_.

 _(swap! x + 5)_ atomically increments _x_ by 5.

 _(swap! x concat [3 4])_ atomically concats the seq _x_ and the vector _[3
4]_.
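
For comparison, a naive sketch of such a callf-style macro in CL (not
necessarily the article's exact definition, and without swap!'s atomicity;
note it also re-evaluates the subforms of a compound place):

    
    
      (defmacro callf (place fn &rest args)
        ;; Set PLACE to the result of calling FN on its current value.
        `(setf ,place (,fn ,place ,@args)))
    
      ;; (callf x + 5)            ; x <- (+ x 5), like (swap! x + 5)
      ;; (callf xs append '(3 4)) ; xs <- (append xs '(3 4))
    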

------
xytop
I'm not a Lisp guy... please tell me, what does that new syntax mean in this
context?

I see it as just new functions, not a language extension.

~~~
qwertyuiop924
They're not functions, they're macros: functions that are called with their
arguments unevaluated and that return code to be evaluated in the environment
of the call.

------
mcphage
Does this article improve as it goes on? I got to:

> Aside from the prefix ordering, Common Lisp’s syntax is already a bit more
> elegant because you can set arbitrarily many variables without repeating the
> assignment operator over and over again

Which is both wrong and pointless, so it didn't seem worth continuing.

