Why Lisp macros are cool, a Perl perspective (2005) (warhead.org.uk)
141 points by pmoriarty on Dec 4, 2014 | hide | past | favorite | 47 comments



Source filters are bad because you get a string and parse it yourself and then get it wrong for edge cases. What if Perl already parsed a piece of source for you and handed you a data structure instead? That fixes the greatest shortcoming.

Solution 1: B::Deparse <http://yapceurope.lv/ye2011/talk/3597>, <http://act.yapc.eu/gpw2014/talk/5266>

Solution 2: B::CallChecker / <http://perldoc.perl.org/perlapi.html#cv_set_call_checker>

In other words, you do have access to the parser. It's not as neat as in Lisp because of the lack of uniform syntax, but still allows for powerful program transformations: <http://p3rl.org/Kavorka::Manual::Signatures>


> In Lisp, source filters never break.

> Never.

To be perfectly fair, this is not quite true. In Common Lisp (the dialect being used here), macros can break in rare, subtle ways. The obvious breakage happens when you don't use GENSYM:

    (defmacro foo (var)
      `(let* ((x 3))
         (setf ,var x)))
    
    CL-USER> (let ((x 7))
               (foo x)
               x)
    7
    
    CL-USER> (let ((y 7))
               (foo y)
               y)
    3
but is fairly easy to avoid. In practice, as long as you make judicious use of GENSYM and packages, the only thing you really need to worry about is someone FLETing a symbol in the package COMMON-LISP (or maybe an external symbol in your package):

    (defmacro bar (x y z)
      `(list ,x ,y ,z))
    
    CL-USER> (let ((list '(7)))
               (bar 1 2 3))
    (1 2 3)
    
    CL-USER> (flet ((list (a b c)
                      (+ a b c)))
               (bar 1 2 3))
    6
SBCL actually yelled at me when I tried this for violating the package lock on COMMON-LISP. So in practice, it's really not much of an issue, and is certainly a hell of a lot better than C macros.
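For reference, a capture-free version of FOO above just generates a fresh symbol with GENSYM (a sketch):

```lisp
(defmacro foo (var)
  (let ((tmp (gensym)))            ; fresh, uninterned symbol: cannot collide with VAR
    `(let* ((,tmp 3))
       (setf ,var ,tmp))))

;; Now the caller's variable is always assigned, whatever it is named:
;; (let ((x 7)) (foo x) x) => 3
;; (let ((y 7)) (foo y) y) => 3
```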

If you're interested in learning more about this, look at Scheme's hygienic macros.


It's actually illegal to rebind symbols in the package common-lisp:

http://www.lispworks.com/documentation/HyperSpec/Body/11_aba...

[edit]

And as a note, one ought not rebind symbols in other packages without taking into account macros from that package. This is the Common Lisp solution to the hygiene problem: don't bind symbols from other people's packages unless you want to change the behavior of other people's code. Any macro that only relies on the bindings of unexported symbols (and symbols from the CL package) is hygienic by default to users from other packages.


Variable capture in CL can be avoided with a few helpful macros.

    (defmacro WITH-GENSYMS (syms &body body)
      `(let ,(loop for s in syms collect `(,s (gensym)))
         ,@body))

    (defmacro WITH-UNIQUE-NAMES (vars &body body)
      `(let ,(loop for var in vars
                   collect `(,var (make-symbol ,(symbol-name var))))
         ,@body))

    (defmacro REBINDING (vars &body body)
      (loop for var in vars
            for name = (make-symbol (symbol-name var))
            collect `(,name ,var) into renames
            collect ``(,,var ,,name) into temps
            finally (return `(let ,renames
                               (with-unique-names ,vars
                                 `(let (,,@temps)
                                    ,,@body))))))

    (defmacro ONCE-ONLY ((&rest names) &body body)
      (let ((gensyms (loop for n in names collect (gensym))))
        `(let (,@(loop for g in gensyms collect `(,g (gensym))))
           `(let (,,@(loop for g in gensyms for n in names collect ``(,,g ,,n)))
              ,(let (,@(loop for n in names for g in gensyms collect `(,n ,g)))
                 ,@body)))))
edit: added once-only
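As a usage sketch, here is a hypothetical SWAP! macro built on WITH-GENSYMS (repeated here so the snippet is self-contained), so the temporary can never collide with the caller's variables:

```lisp
(defmacro with-gensyms (syms &body body)
  `(let ,(loop for s in syms collect `(,s (gensym)))
     ,@body))

;; Hypothetical example macro: exchange two places without capturing
;; any variable the caller might also be using (even one named TMP).
(defmacro swap! (a b)
  (with-gensyms (tmp)
    `(let ((,tmp ,a))
       (setf ,a ,b
             ,b ,tmp))))
```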


Let over Lambda uses this (maybe something like this?). Very cool book.

http://letoverlambda.com/


Don't forget ONCE-ONLY


Another place where macros in Lisp fail is that they may evaluate an expression twice. (Using the Scheme dialect, because I don't write good enough CL.)

  (define-syntax-rule (inc! x)
    (begin (set! x (+ x 1)) x))
  
  (define-syntax-rule (square x) (* x x))

  (define n 5)
  (square (inc! n)) ;==> 42, and n is now 7
(Note that inc! must itself be a macro here: as a plain function, set! would only mutate the local parameter and n would never change.) This can be solved using a temporary variable as in the original article,

  (define-syntax-rule (skuare x) (let ([temp x]) (* temp temp)))

  (define m 5)
  (skuare (inc! m)) ;==> 36, and m is now 6
But it's easier than in Perl because you don't have to declare the type of temp. Also, Scheme has hygienic macros, so you don't need to use gensym to avoid temp capturing a variable from the caller.


> Another place where macros in Lisp fail is that they may evaluate an expression twice

Strictly speaking, macros don't evaluate their expressions at all.

Square is a bad example, though. Macros are not for writing function-like expressions; they're for extending (or more generally transforming) the syntax of the language. Inline functions and compiler macros are more appropriate tools for things like square.
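A sketch of those alternatives in CL: an inline function keeps normal evaluation semantics while still letting the compiler open-code the call, and a compiler macro can additionally fold constant arguments.

```lisp
;; Inline function: arguments are evaluated exactly once, like any
;; function call, yet the compiler may expand the body at the call site.
(declaim (inline square))
(defun square (x)
  (* x x))

;; Compiler macro: fold literal arguments at compile time, otherwise
;; decline by returning the original form unchanged.
(define-compiler-macro square (&whole form x)
  (if (numberp x)
      (* x x)   ; compile-time result for literal arguments
      form))    ; leave non-constant calls alone
```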


> Strictly speaking, macros don't evaluate their expressions at all.

OK, you are right.

A better example is a bad implementation of a short circuiting or:

  (define-syntax-rule (bad-or x y)
    (if x x y))
It can't be a function because sometimes you don't want to evaluate y, but in the expanded code x is evaluated twice whenever it is true.
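The usual fix binds x once in a temporary (a sketch in the same Racket-style syntax); hygiene guarantees that t cannot capture a caller's variable of the same name:

```scheme
(define-syntax-rule (good-or x y)
  (let ([t x])      ; x is evaluated exactly once
    (if t t y)))    ; y is only evaluated when x is false
```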


> there is no syntax

That doesn't seem very true, either. Just from looking at it, I have no idea what the comma in ',var' means or what '((x 7))' is supposed to mean.


> that doesn't seem very true, either

Strictly, it isn't. The point isn't that there's no syntax, though; it's that there's very little syntax, compared to other languages (consider C++'s… what, 17 operator precedence levels? And the Turing-complete parser?).

The comma in ',var' and '((x 7))' are examples of Common Lisp's built-in syntax. The former is part of a shorthand notation[1] for injecting literal code into templates. The latter is part of the syntax of LET[2].

[1] http://www.lispworks.com/documentation/HyperSpec/Body/02_df....

[2] http://www.lispworks.com/documentation/HyperSpec/Body/03_aba...


An easier to read ("=3D" converted back to "=") version of this post: [1]

Previous discussion of this post on HN: [2]

Also see: [3]

[1] - https://bpaste.net/show/a9bb9668beac

[2] - https://news.ycombinator.com/item?id=795344

[3] - http://stackoverflow.com/questions/267862/what-makes-lisp-ma...


An excellent argument against C macros, in any case.


Yep. Even as a long time C hacker who knows to avoid the common pitfalls, I wasn't aware of just how many ways there are to botch up the use of macros.

There are lots of cases where macros look like the obvious and elegant solution to a problem. Some of those cases are bugs waiting to happen.


Nicely summarized, indeed.

His bemusement at the reaction of C++ proponents made me laugh: "But I did not think at the time that this would be controversial. I was sure that even the most rabid C++ fans would agree with me that the C++ macro system blows goat dick...."


Using setf is a nice example of something which a macro can do which a function cannot (as well as being a good example of code-is-data).

Lots of posts discussing lisp macrology lack that kind of compelling use case.

The 'magic' used by setf is a 'destructuring bind': http://dunsmor.com/lisp/onlisp/onlisp_22.html
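A minimal illustration of that use case: SETF inspects the unevaluated place form (CAR *CELL*) at macro-expansion time and picks the right low-level update, something no ordinary function could do with an already-evaluated argument.

```lisp
(defvar *cell* (list 1 2 3))

;; SETF sees the form (CAR *CELL*), not its value, and expands to the
;; appropriate update (conceptually RPLACA here).
(setf (car *cell*) 42)

;; *cell* is now (42 2 3)
```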


The oldest reddit thread about it (from 8 years ago) is interesting: http://www.reddit.com/r/hackernews/duplicates/2o90ty/why_lis...


Perl6 advent calendar post (from 2012) on macros might be of interest http://perl6advent.wordpress.com/2012/12/23/day-23-macros/


Could someone explain why all of the = signs have 3D after them?

$x =3D ...


The link is to a web archive of a mailing list. I'm guessing the software doesn't properly handle the quoted-printable encoding of the e-mail.

In QP a = is used as the escape character, and =3D is the escape sequence for =.


Does anyone know the basic differences between Julia's macros and Lisp macros? Are they the same or is one more powerful/flexible than the other?


Julia's AST is very similar to an S-expression, so in that sense its macros have a lot of the same benefits as Lisp macros.

However, they aren't as easy to use, since the AST does not necessarily match the way the source looks. In Lisp the AST looks exactly like the source, so it's easier to visualize what a macro needs to do.

Julia requires you to mentally translate the source code syntax into an AST as an extra step. That extra step doesn't make them any less powerful/flexible though. It just makes them slightly harder to use.


Only loosely related, but I would love to see multiple dispatch come to Julia's macros. I seem to spend a fair bit of each macro switching on the expression's head, and would happily offload that step. (Of course, that might be growing pains since I don't really have a good feel for writing macros in the first place.)

Naturally, the best way to make that happen would be for me to write a macro implementing it... :)


OT, I wish JS had macros. There is sweet.js, but I get the feeling it's not used very much....


So how does one write the (square x) macro in Common Lisp? Presumably, one would like to avoid evaluating the argument twice as well as accidental variable capture.
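One common way (a sketch): bind the argument to a GENSYM, so it is evaluated exactly once and no variable from the caller can be captured:

```lisp
(defmacro square (x)
  (let ((g (gensym)))     ; fresh, uninterned symbol
    `(let ((,g ,x))       ; evaluate the argument exactly once
       (* ,g ,g))))
```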


Macros are powerful but still a bad idea. Like GOTO - using GOTO, you can implement absolutely any kind of weird control flow structure you can imagine. You have a lot more freedom than the usual if/else/for/while. And you can, if you use it wisely, make code that's more coherent and consistent than any other. But people don't.

Coherence of assignment is a very good idea. But you don't need macros to get it. If you have a language with simple syntax where assignment is just another function call (perhaps via a typeclass), everything in this example still works; it's not the fact that lisp has macros for assignment that's important here, it's the fact that assignment is not special, not handled specially in the parser; it's just another function. You do that, and you get the advantages, with or without macros.

Lisp's lack of syntax makes macros easy, but they're still a mistake; they still lead to unreadable code because you can't tell what a given piece of code does until you understand which parts are macros and understand all of them. Give users more structured ways of achieving the (often quite small) number of things they want to do with macros. If they need more things, add those things as first-class features, the same way e.g. exceptions were added when people realized if/else/for/while didn't capture all the use cases for goto. But don't throw up your hands and let every library rewrite code willy-nilly.


Sigh, the tired argument that because people may use a powerful feature wrongly, that feature shouldn't exist.

>they still lead to unreadable code

Complete unsubstantiated nonsense.

>If they need more things, add those things as first-class features

If JavaScript had macros we could have had 'let' before ES6 instead of writing '(function(x) { ... })(42)'. Do I really need to know that 'let' compiles to a lambda with arguments applied? No, I just need to know that 'let' binds some values to variables and evaluates the body. Do I need to know that 'cond' compiles to a nested 'if' chain to know how to use it? No.

I don't understand why people think that syntax has to be behind a gate that only the language implementors have access to. Syntactic abstraction makes talking about problem domains easier, not unreadable.


Javascript has sweet.js. It isn't Lisp, but it's about as close as most infix languages can get (with the exception of Dylan).

The let in ES6 cannot be expressed in the same terms as a let in lisp. Mozilla proposed that kind of let with the let block ('let (<assignments>) { <statements> }').

The issue is that other idiots (in some quest to turn JS into Java or C#) decided that a block-scoping kluge was better. After all, it makes interoperability harder (old code using var), makes scoping rules implicit (blocks without let are still bound to the function), adds unnecessary cognitive overhead (leading to more bugs as code gets harder to reason about and visually parse) and generally adds a whole new set of ways for new and old JS programmers to screw up.


It is true that macros are regarded as a last resort mechanism: whenever you can use functions, or generic functions, you are encouraged to do so.

Also, macros are nothing more than functions: they operate on the syntax tree, at compile time, but they are still functions. All language implementations (compilers, interpreters) somehow need to manipulate syntax trees: in CL, this facility is exposed to the programmer (hence the "programmable programming language" quote from John Foderaro).

> If they need more things, add those things as first-class features,

That's not the philosophy behind Lisp. You don't wait for a new release of the standard (C++11, C++14, ...) and for compiler support: if you need to do crazy things with syntax trees, you are welcome. Then you may publish it and other people will use your libraries, like it is done for any other libraries in any language.

> they still lead to unreadable code because you can't tell what a given piece of code does until you understand which parts are macros and understand all of them

So, in order to understand a piece of code, I must: (1) know what is, or isn't, a macro, and (2) if it is a macro, know what it does. The same applies for any function: don't apply a function if you don't know what it does.

You can rely on docstrings, macroexpand, as well as live coding techniques (describe, inspect) to help you understand complex operations. But saying that macros make code inherently "unreadable" is not justified: what about monads? Do they simplify code or complicate it? Like macros, they introduce abstraction (loop) and can help with consistency (with-open): some people dislike them because they feel they lose sight of what is "really" happening, but that is not a bad idea in itself.

> But don't throw up your hands and let every library rewrite code willy-nilly

Strawman: people don't really do that.

Regarding your example, I really would like to know how typeclasses can be used to solve `(setf (car list) x)`. If you have a concrete example of what you mean, for example in Haskell, please explain.


> So, in order to understand a pice of code, I must: (1) know what is, or isn't, a macro, and (2) if it is a macro, know what it does. The same applies for any function: don't apply a function if you don't know what it does.

The problem is the nonlocality. A function call can only ever have an effect at the exact point in the syntax tree where it is (side-effecting functions are bad); you can work your way up, understand each argument in isolation, then understand the function call applied to those arguments, and then you understand the value of the whole function call expression and can continue working your way up the tree. There's no such assurance with macros, because a macro arbitrarily far up in the tree could be changing the meaning of a token many lines further down.

> Regarding your example, I really would like to know how typeclasses can be used to solve `(setf (car list) x)`. If you have a concrete example of what you mean, for example in Haskell, please explain.

I'm a Scala guy rather than Haskell. You can't do it with language-level state, but you shouldn't be using that anyway; lenses (inside a state monad) give you that symmetry between get and set (apologies if I mix up the syntax, there are a couple of different implementations):

    (for {
      x ← list_ |-> head toState
      y = x * 2
      _ ← list_ |-> head := y
    } yield {}).run(someList)
:= is just a method and Lens is just a typeclass; you can write your own instances (though for most "normal" cases you just derive them with a 1- or even 0-liner) for your own types and then the same state-manipulation code will work when that state is contained inside one of your objects.


I forgot about lenses, thanks.

About non-locality of macros vs. functions:

Macros, being functions, provide this kind of assurance too, at the syntax-tree level: you only operate on the local, current subtree. They can indeed modify the meaning of symbols in that subtree, but I think you are taking things to the extreme. Of course, an arbitrarily nasty macro "far up in the tree" (how deep are typical functions, anyway?) could do horrible things. By that logic, we must also ban sugar, because if someone eats too much sugar, that someone dies.

However, the average usage is more subtle. For example, when you DEFUN a function, you are registering a function in your current environment (side-effects) through a macro, which does not dramatically change the semantics of the body of the function, compared to, say, an anonymous LAMBDA (the only change is, I think, the ability to RETURN-FROM a named function).

CL is not a functional language: you can loop over lists or across vectors, you can setf special variables and mutate strings. You have static and dynamic non-local exits (return, throw/catch, signal) and even goto's (tagbody). That does not mean that you should use them all and go against best practices and common sense. Just because a feature could be misused does not make it irrelevant.

For the record, I have nothing against functional languages (or any other language, really (except of course PHP ;-)). It just seems unfair to attack macros. They should be avoided whenever possible, but they have their uses: C (macros), C++ (templates), and even Haskell and Scala try to have them:

http://scalamacros.org/usecases/index.html

https://downloads.haskell.org/~ghc/7.6.1/docs/html/users_gui...

http://neilmitchell.blogspot.fr/2007/01/does-haskell-need-ma...

Finally: macros are not a hack used to fix a supposedly missing feature of the language (e.g. "I don't need macros because I have lazy evaluation", etc.). Macros allow programmers to interact with the compiler (reader macros, compiler-macros): they are the primitive provided to perform meta-programming tasks.


Yeah, I think they're a mistake in Scala too. Hopefully I'm wrong.

I think "the primitive" is a good characterization; I view writing a macro the same way I view using a mutex directly, or as I said, goto (or, less trollishly, manually expressing a program in continuation-passing-style). That is, these things are occasionally necessary, but it's usually a sign that your higher-level constructs are inadequate.


Sure, macros can help define otherwise lacking features: you don't have promises in CL? (ql:quickload :cl-async-future). Here, you have promises.

But that is not my point. What I tried to say is that macros are orthogonal to high-level constructs because they are used at the compiler level, for meta-programming purposes.

For example, take "cl-ppcre", a.k.a. perl-like regexes.

When you do (ppcre:regex-replace "<my regex>" ...) with a constant string, there is a compiler macro that calls create-scanner at compile time, to avoid parsing the regex and building the scanner at runtime.
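The pattern can be sketched without pulling in cl-ppcre; MAKE-SCANNER and MY-SCAN below are hypothetical stand-ins for the real API (imagine MAKE-SCANNER is expensive). The compiler macro rewrites calls with a literal pattern so the scanner is built once, at load time:

```lisp
(defun make-scanner (regex)
  ;; Stand-in for an expensive compile step such as ppcre:create-scanner.
  (list :scanner regex))

(defun my-scan (regex-or-scanner target)
  ;; Accepts either a raw pattern string or a prebuilt scanner.
  (let ((scanner (if (stringp regex-or-scanner)
                     (make-scanner regex-or-scanner)
                     regex-or-scanner)))
    (list scanner target)))          ; pretend to scan

(define-compiler-macro my-scan (&whole form regex target)
  (if (stringp regex)                ; constant pattern: hoist the scanner
      `(my-scan (load-time-value (make-scanner ,regex)) ,target)
      form))                         ; otherwise leave the call alone
```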

Maybe you think that regexes should be built-in and that a sufficiently smart compiler should be able to do the same job. But how is this different than what we have in CL? the compiler is sufficiently smart because we allow the programmer to interact with the compilation process.

Macros are adequate high-level constructs when you want to manipulate programs as data.


A sufficiently smart compiler probably should be able to inline constant expressions, yes. And that's a transform that we know is safe, and I would want my compiler to have quite a lot of assurance that its constant-inlining worked correctly (probably strong typing and high test coverage). I think it's worth imposing a bit of structure on any code that runs in the compiler (e.g. compiler plugins at least have a test suite and a release process - and even then I wouldn't use a plugin (other than maybe a readonly analyzer) on production code). And of course I'd like my compiler code to be structured well, such that it could be used as a library (or indeed, such that it is a library, from the point of view of those compiler plugins). But having a good library for transforming source code doesn't mean I want to allow random one-liner snippets to transform my source code.


The regex example is more about partial evaluation than constant inlining. So instead of commenting on "random one-liner snippets" and "structure", which I find boring, I'd prefer to share an example of partial evaluation techniques applied to Haskell:

http://repository.readscheme.org/ftp/papers/coutts_transfer_...

The whole essay is interesting. Then, at section 4, we discover that the author relies on Template Haskell:

> Using the Template Haskell infrastructure confers a number of advantages. There are the straight-forward obvious advantages that TH provides an abstract syntax tree, a parser, pretty printer and a monad providing unique name supply and error reporting. Because it is part of a compiler, the compiler also does the other ordinary semantic checks including typechecking. This is obviously useful since our partial evaluator can only be expected to work for well formed and typed Haskell programs and so we are spared from performing these checks. A less obvious advantage is that we may be able to do binding time checking in terms of a slightly modified Haskell type system and implement it using the existing compiler’s type checker rather than having to write one from scratch.


> That is, these things are occasionally necessary, but it's usually a sign that your higher-level constructs are inadequate.

Could you clarify with some examples? What constructs are you referring to that are inadequate?


If you're using a mutex directly, it's because your higher-level concurrency constructs - task queues, actors etc. - are inadequate. If you're using goto, it's because your higher-level control flow constructs - if/for/while, exceptions - are inadequate. Likewise if you're using macros, I'd argue it's usually because your within-language constructs are inadequate; things like LINQ (which I've seen people use macros to emulate in other languages) are important and common enough that there should be a standard way of doing them in the language.


Maybe I'm not seeing the forest for the trees here but I still don't understand what constructs you are referring to which are inadequate. Most Lisps are built from a very small selection of "special forms" and the rest is built in Lisp itself. From this perspective it's not a very large language at all. Is there a specific construct you had in mind?

Mutexes, goto... I'm afraid I don't understand. CL, for example, is an ANSI specification and makes no mention of threads or threading. That hasn't stopped implementations from providing OS support for threading and library authors from maintaining cross-implementation compatibility libraries. Mutexes are just one well-known method for solving a very specific problem with shared-memory threads. It's not even a primitive in the language itself. Macros helped us make threading possible without having to call together a new ANSI committee to extend the language specification.

In a similar vein I'm sure there are plenty of LINQ packages available in CL with all of the attendant parallelization patterns, etc.


> Is there a specific construct you had in mind?

No, what I have in mind is the things people use macros for. I've talked about assignment or LINQ; maybe you should talk about a specific case where you think macros are a good solution?

> Macros helped us make threading possible without having to call together a new ANSI committee to extend the language specification.

ANSI committees aren't just for fun. Maybe you should've called one - threading is a pretty fundamental language feature and worth standardizing across all implementations.


> I've talked about assignment or LINQ; maybe you should talk about a specific case where you think macros are a good solution?

If SETF didn't convince you (and it's much more interesting than the original author made it out to be) then perhaps a Lisp->GLSL compiler will[0]?

Or perhaps the various destructuring macros (WITH-OPEN-STREAM WITH-OPEN-FILE) which handle the bindings and returns over the primitive forms for you?

Or a more contentious example: LOOP and FORMAT. LOOP generalizes almost any looping pattern imaginable into a small DSL. Some dislike it because you can't macro-expand into it but then that's why we have ITERATE. Whichever you choose just compiles down to the primitives ultimately.
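For instance, a one-line slice of LOOP's DSL, iterating, filtering with WHEN, and accumulating with COLLECT:

```lisp
;; Collect the squares of the odd elements.
(loop for x in '(1 2 3 4 5)
      when (oddp x)
        collect (* x x))
;; => (1 9 25)
```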

There is already a significant body of literature which investigates the amazing benefits of Lisp macros. I suggest reading a few. Let Over Lambda is a particularly interesting tome.

I ask for your reasoning only because I've never encountered it before. I've heard the usual, "it's too powerful," argument enough to not be swayed by it anymore. However... macros as a sign of the inferiority of the language itself? That is too rare and, it appears, unfounded.

> ANSI committees aren't just for fun. Maybe you should've called one - threading is a pretty fundamental language feature and worth standardizing across all implementations.

Plenty of people new to Lisps have waltzed into c.l.l or #lisp over the years and asked why a new ANSI committee hasn't been formed to "modernize" the language. The answer I've heard repeated ad nauseam is that the specification was drafted so that another committee wouldn't need to be formed. The language definition only defines a very primitive baseline and the rest is extensible by the implementers and users.

Threading works absolutely fine in any supporting implementation. I've written highly threaded code without a hitch in CL. It's not a problem that it's not defined in an ANSI specification.

And perhaps I'm wrong, but I don't think "threading" is a language issue so much as a facility provided by the operating system and underlying hardware. POSIX is the standard that defines the threading API for most Unix-like operating systems. Windows defines its own.

And I'm pretty sure that the IEEE committee didn't publish its standard until 1995, which is a year after the ANSI CL specification. I would hazard a guess that the committee at the time didn't see the point in crystallizing the specification around an incomplete and as-yet unpublished standard for a single platform. That was something the implementers could handle on each platform they supported... and the library authors could handle with cross-implementation compatibility packages.

So no, I don't think I need to call together an ANSI committee and I'm pretty sure if I did I would be laughed out of #lisp.

Threading is important and it was taken care of. I don't really have a problem with the way it was built in CL. I just use the BORDEAUX-THREADS package if I'm writing a library, or stick to the implementation's APIs if I'm writing an application for a specific platform... which is oddly how you'd do it if you were writing a threaded C application too.

[0] https://www.youtube.com/watch?v=hBHDdYayOkE&list=UUMV8p6Lb-b...


> perhaps a Lisp->GLSL compiler will[0]?

I'm all for having multiple compilers and reusing the same AST-manipulation code in them. I just think the application of that to code should be a bit more structured, and not rely solely on programmer restraint.

> Or perhaps the various destructuring macros (WITH-OPEN-STREAM WITH-OPEN-FILE) which handle the bindings and returns over the primitive forms for you?

I'd expect a language to be able to do that; in Scala I do it with a library and for/yield. For/yield is still a bit magic, but it strikes a very good syntactic balance - code in a for-yield looks very similar to normal code, but the slight difference (← instead of =) makes it clear that something special is going on, even if you're many lines down from the line that tells you which monad it is.

> Plenty of people new to Lisps have waltzed into c.l.l or #lisp over the years and asked why a new ANSI committee hasn't been formed to "modernize" the language. The answer I've heard repeated ad nauseum is that the specification was drafted so that another committee wouldn't need to be formed. The language definition only defines a very primitive baseline and the rest is extensible by the implementers and users.

And as a result there are multiple incompatible solutions to the problem, and I get the impression that's contributed to lisp's unpopularity - admittedly I'm not a lisp expert. I've watched a similar scenario cause problems with async i/o in perl - multiple incompatible implementations, libraries that attempted to abstract over them but added their own layers of incompatibilities, and a language that became very unapproachable. I think the approach python has taken to async i/o - allow a lot of libraries to experiment with their own approaches initially, but then once the issues have been worked through and reached some level of consensus, standardize the API as part of the language - is a better one and makes the language more accessible.

> Threading is important and it was taken care of. I don't really have a problem with the way it was built in CL. I just use the BORDEAUX-THREADS package if I'm writing a library or stick to the implementations APIs if I'm writing an application for a specific platform... which is oddly how you'd do it if you were writing a threaded C application too.

I'm spoiled by Java (well, the JVM), which did standardize threading cross-platform as part of the language, and I think has reaped rewards from doing so.


> I'd expect a language to be able to do that

CL does do that. Those macros are defined by the specification. If they weren't then you could write them yourself.

If you need special syntax to help you understand your program you have access to the reader and can dispatch special macros to transform your syntax into ordinary lisp forms. I've done it to implement readers for little assemblers.

I don't see how the comparison to Scala is relevant. We're talking here about what the weakness in Lisp is that macros cover up. There are only 27 primitive forms described in the CL specification that everything ultimately "compiles" down to. Do you think there is a form which doesn't exist or one that is impaired which if replaced or fixed would make macros unnecessary?

Macros are such an intrinsic part of the language that I'm surprised to hear criticism from someone that they're a symptom of a design problem!

> I'm spoiled by Java (well, the JVM), which did standardize threading cross-platform as part of the language, and I think has reaped rewards from doing so.

It's really not that big of an issue. There are plenty of highly-threaded cross-platform applications written in nearly every language under the sun. CL does better than most because it can abstract, at the language level, the differences in platform implementations without touching the underlying implementation or specification.

I think the CL spec could actually have left more things under-specified, like file systems. One ugly wart in CL is the file path specification. At the time, the committee felt it was important to crystallize the part of the specification dealing with the multitude of file-system addressing schemes available then. And a couple of decades later we find that wasn't really necessary, as few of those file systems are in use today... yet a conforming implementation must provide this API. Terrible.

On the other side of the coin, it was decided by the JVM designers that TCO wasn't necessary. Today they are wringing their hands trying to get it into the next release. The ANSI CL specification doesn't require implementations to implement TCO, and yet most do without harm to anyone's code. The JVM decided the right level of abstraction was bytecode and now they have to be super careful not to break the world whenever they make a change. Lisp chose the language itself to abstract upon... which makes sense since it's a symbolic processing language (list processing, I believe, is a misnomer and an artifact of history).

They didn't specify a threading API and we have much more flexibility today than if they had settled on one.

What remains astounding to me is the quality of machine code some of the open source implementations can produce. SBCL is a wonderful compiler. CCL is also really good.


> We're talking here about what the weakness in Lisp is that macros cover up. There are only 27 primitive forms described in the CL specification that everything ultimately "compiles" down to. Do you think there is a form which doesn't exist or one that is impaired which if replaced or fixed would make macros unnecessary?

Macros as a compiler implementation technique for standardized features are great; it's non-standardized macros I object to. That said, I do think that in this case things would be improved by a single language-level feature, the equivalent of do notation or for/yield (and sure, probably implemented via a (single) macro). Then WITH-OPEN-FILE/WITH-OPEN-STREAM could become ordinary library functions, and the common part would be evident, rather than macros that happen to have similar implementations but without that similarity being exposed to the programmer. And a programmer wanting to add support for a similar WITH-MYCUSTOMTYPE construct would only need to write a function, not a macro, just like they do in Scala.


The common part is the special operator UNWIND-PROTECT. You can use it to define a function (not a macro). This is in fact a recommended way to design your program: define a CALL-WITH-X function and write a simple WITH-X macro that expands into a funcall to that function, with a more convenient syntax (see http://random-state.net/log/3390120648.html).
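A minimal sketch of that pattern (the WIDGET names and ACQUIRE-WIDGET/RELEASE-WIDGET functions are hypothetical stand-ins for a real resource): all the cleanup logic lives in the function, and the macro is pure syntactic sugar over a funcall.

    ;; The function owns the resource and the UNWIND-PROTECT.
    (defun call-with-widget (thunk)
      (let ((widget (acquire-widget)))
        (unwind-protect
             (funcall thunk widget)
          (release-widget widget))))
    
    ;; The macro only provides nicer syntax; it expands into a funcall.
    (defmacro with-widget ((var) &body body)
      `(call-with-widget (lambda (,var) ,@body)))
    
    ;; Callers who prefer plain functions can skip the macro entirely:
    ;; (call-with-widget (lambda (w) (use w)))
This is exactly the locality the grandparent asks for: the functional entry point exists and composes like any other higher-order function, and the macro is an optional convenience on top.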

Scala resource management techniques are built upon try/finally blocks, aren't they? If you want, you can use Scala ARM, which is a non-standardized library. Then you can use a monadic approach, an imperative one, etc. This boils down to conventions and idioms, which are not standardized either.

You seem to think that there should be an authority acting as the gatekeeper between the good things and the bad things that are added to a language, and that the ability to "mess" with the abstract syntax tree leads to chaos (I guess your alignment is Lawful Good). We disagree. CL is designed to evolve smoothly over time, and macros are one of the features that enable this.


scala-arm is not standard but it's just a library; it provides ordinary functions that still obey the kind of locality I was talking about. That is, any construct that's using a managed resource has a "<-" at the point of use, alerting you that something funny's going on.


Julia has macros denoted by an at-sign (e.g. @inline) to make clear that something special is going on. Understanding how to write effective macros takes some work - perhaps more than in Lisp; I can't fully say, myself - because the expression representation is less apparent to the casual user. This, combined with an emphasis on "don't use macros when a function will do", seems to have led to fairly judicious use of macros in most of the code I have seen.

Good tooling can also help with the locality problem. For example, by showing macro expansions in-place when possible (and desired).
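In Common Lisp this tooling largely already exists: MACROEXPAND-1 shows one step of expansion at the REPL, and editors like SLIME bind it to a keystroke. A sketch using the standard WITH-OPEN-FILE (the exact expansion is implementation-dependent; the comment shows roughly what SBCL produces):

    CL-USER> (macroexpand-1
              '(with-open-file (s "/tmp/x" :direction :output)
                 (write-line "hi" s)))
    ;; Roughly, on SBCL:
    ;; (LET ((S (OPEN "/tmp/x" :DIRECTION :OUTPUT)) ...)
    ;;   (UNWIND-PROTECT
    ;;       (PROGN (WRITE-LINE "hi" S))
    ;;     ...close S...))
Expanding in place like this makes the "something funny's going on" visible on demand, which addresses much of the locality concern.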


I've encountered countless situations where even a tiny macro made the code more concise, more readable and less error-prone.

Functions are there for direct transformation of some data.

Macros are there for some more sophisticated, not immediately obvious transformation of some meta data.

Functions are at a lower level, macros are at a higher level.
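One concrete instance of that division: a function receives arguments that have already been evaluated, so it cannot implement short-circuiting control flow, while a macro can choose what gets evaluated and when. A minimal sketch (MY-UNLESS is a toy reimplementation of the standard UNLESS):

    ;; As a function, BODY would be evaluated before the call, defeating
    ;; the purpose. As a macro, the body runs only when TEST is false.
    (defmacro my-unless (test &body body)
      `(if ,test
           nil
           (progn ,@body)))
    
    ;; (my-unless (probe-file "/tmp/x")
    ;;   (format t "file missing~%"))   ; printed only when PROBE-FILE fails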

Please remember: your users will use your programming language in unexpected ways. You (as a language designer) are most probably not the smartest human on Planet Earth, so please let someone smarter than you add the things you were not able to conceive of beforehand. Thanks!

Lisp sees it this way. Congrats!!



