For me, Lisp macros can be a bit vexing; I prefer not to have too many unique, one-of-a-kind meta-abstractions cooked up by teammates on a big project. A good, widely adopted Lisp macro can be handy: for example, John Wiegley's use-package makes my Emacs configuration much cleaner. I'm not sure where the right boundary is.
Decades ago I was quite enthusiastic about macro systems, but the experience of TeX programming made me realize that sometimes plain old functional or applicative abstractions are better than macro based ones no matter how domain specific they are.
For "internal" macros, I think it's important to stop thinking of them as magical abstractions. It's better to approach both constructing and using them as an exercise in making invalid states unrepresentable.
A macro lets you build a domain-specific abstraction that is not only extremely readable (matching the domain) but also ensures you simply can't produce things that don't make sense domain-wise. In this way, such abstractions are much better than the boilerplate you'd have to write in classical languages (though Haskell-like languages can probably get a similar effect through judicious use of types).
I've been wondering this for a while, and this seems like a reasonable time to ask it:
If I understand correctly, you use macros to change the syntax of Lisp. But if I'm trying to write a DSL, why would I need new syntax? Why aren't new functions and data structures enough?
Hmm. That fits with advice from Extreme Programming: "Pay attention to pain". When it's hard (or even tedious) to write something, pay attention to that. It's trying to tell you something.
That's usually the advice given to aspiring macro writers. You write ordinary code, but when you start feeling there's a higher-level concept there for which you repeatedly write the same boilerplate, that's when you consider using a macro to introduce that concept as a first-class, explicit thing.
It's not that different from abstracting through functions or objects, but it lets you abstract away the repetitive code structure as well.
> But if I'm trying to write a DSL, why would I need new syntax?
You don't necessarily. But you'll need code generation to implement your domain-level abstraction.
An example from within the programming domain is OOP itself. CLOS is essentially a bunch of macros that bolt OOP on top of base Common Lisp. It unified and standardized what had been many experimental flavours of OOP systems, which were themselves implemented as macros. The same goes for pattern matching and logic programming: both are implemented as libraries for CL, via macros.
Outside the programming domain, I can imagine working with software that, say, simulates chemical reactions, where you'd want atoms, molecules, reactions, energy exchange, etc. as top-level concepts. You can of course model all of these as data structures, classes, helper functions, and so on. But Lisp macros let you take that and close up the abstraction, building a clean interface that doesn't leak the underlying machinery. On top of that, you can shift that machinery to compile-time execution (while still being able to reuse it at runtime).
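To make that concrete, here is a purely hypothetical sketch of such a domain-level form (all names here are invented for illustration; register-reaction stands in for the underlying machinery):

```lisp
;; DEFREACTION hides the data structures and registration calls
;; behind a form that reads like the domain itself.
;; REGISTER-REACTION is an assumed helper, not a real library API.
(defmacro defreaction (name reactants -> products &key energy)
  (declare (ignore ->))
  `(register-reaction ',name ',reactants ',products ,energy))

;; Usage:
;; (defreaction combustion (ch4 o2 o2) -> (co2 h2o h2o) :energy -890)
```

The arrow is just a symbol the macro swallows; the call site never sees the plumbing.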
1. Macros give you laziness. If you want to conditionally execute part of what's supplied, with plain functions the user has to wrap those parts in closures. A call like
(my-if condition then else)
would evaluate each of those arguments before my-if is even called, so to keep it a function you'd need something like (my-if condition (lambda () then) (lambda () else)).
(Bad example, don't make your own if, but it illustrates the idea.)
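As a macro, my-if receives its branches unevaluated, so no wrapping is needed. A minimal sketch (again, purely for illustration):

```lisp
;; MY-IF as a macro: THEN and ELSE arrive as unevaluated forms,
;; and the expansion decides which one actually runs.
(defmacro my-if (condition then else)
  `(cond (,condition ,then)
         (t ,else)))

;; (my-if (> 2 1) (print "yes") (print "no")) evaluates only the "yes" branch.
```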
2. You want to capture variables from the calling context, same issue as above. Consider the with-resource pattern:
(let ((foo ...))
(with-resource (lambda (r)
(do-something-with-resource r foo))))
Or the macro-d version:
(let ((foo ...))
(with-resource (r)
(do-something-with-resource r foo)))
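A plausible definition of such a WITH-RESOURCE macro is just a thin wrapper that expands into the closure-passing version (CALL-WITH-RESOURCE here stands in for the assumed function-based API):

```lisp
;; The macro packages the body into the lambda for you, so the
;; call site keeps access to surrounding variables like FOO
;; without writing the lambda by hand.
(defmacro with-resource ((var) &body body)
  `(call-with-resource (lambda (,var) ,@body)))
```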
3. You want to do something that can be done efficiently in Lisp but uses very low-level stuff (like tagbody and go). Rather than writing that by hand (repeatedly, if it's a recurring pattern, as with various state machines), you can present a lispier syntax that compiles (via macros) down to the low-level primitives (try a macroexpand on some of the do constructs). See [0] for a variation of this idea in Scheme.
4. You want to do something repeatedly and consistently, and want to remove errors. See the definition of defdot in [1]. You could define and register all those functions yourself for each new .<whatever>, or you could let a macro do the heavy lifting.
Lisp macros (as opposed to macros in other languages, C included) are especially powerful, and not because you're changing the syntax: Lisp barely has surface syntax in the first place, so you're always working directly on its abstract syntax tree. Conditionals, loops, and functions all have the same structure, which is why you can already write something that looks like a loop but is actually a function.
The difference is how the arguments are evaluated. For example, say I want a function implementing a for loop of the form (myfor a from 1 to 10 do (print a)). If myfor is a function, every argument is evaluated immediately: it will try to look up variables called a, from, to, and do, and it will try to evaluate (print a) once, before any looping happens. Macros allow for delayed evaluation: myfor receives all of its arguments (at compile time) as symbols and forms instead of values, which it can then rearrange into a form that evaluates properly at runtime. You can go further if you actually want to create new syntax and use reader macros, which let you write your own parser and therefore escape writing directly on the AST (then you can even write C-like syntax within Lisp).
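A sketch of such a MYFOR macro, assuming we simply discard the FROM/TO/DO marker symbols and expand into an ordinary DO loop:

```lisp
;; MYFOR receives A, FROM, 1, TO, 10, DO and the body as raw
;; symbols and forms, then rearranges them into a DO loop.
(defmacro myfor (var from start to end do &body body)
  (declare (ignore from to do))
  `(do ((,var ,start (1+ ,var)))
       ((> ,var ,end))
     ,@body))

;; (myfor a from 1 to 10 do (print a)) prints 1 through 10.
```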
And if your question is: do I actually need them? The answer is obviously no, as many languages do without them (and there are alternatives for many use cases, like lazy evaluation). The advantage is that your language can have a very simple core, and features that were not built in (say, a pattern-matching construct) can be added entirely in userspace (which is also good for testing new functionality before adding it to the core language). Macros are also very efficient, since they run at compile time: if you use a macro in many places, its expansion logic runs once per call site during compilation, unlike a function, which usually has to run its logic every time it's called. All of this means that a DSL isn't just a nice adaptation of your domain within the host language; it's effectively an optimized language for your domain that reuses the host compiler without changing its source code.
OK, I understand why you might want to do that. But there's nothing domain specific about that. I might want a decent looping construct in any domain. That's just trying to make a decent language, not a domain specific one.
Or is the idea that, as soon as I go beyond the Common Lisp standard, it's "domain specific", no matter how completely general my extensions are?
I've always interpreted DSL as creating a way to write programs in the language of the problem to be solved. A loop construct, no matter how useful, seems to fall short of that.
The for loop example was just to illustrate the difference between a function and a macro. A complete DSL would be something like implementing Prolog within a Lisp, or the other examples in Racket [1], or, say, a special configuration format for electric circuits within the language, or a SQL-like interface for manipulating data (you can write LINQ using macros).
It's not a direct comparison of course (nor is it a Lisp), but here is an example of linear optimization in a library that uses macros to make it a DSL closer to the problem description (in Julia, @ before a name means it's a macro, so it's easy to spot) and one that uses plain methods:
If, in the Julia example, @variable were a function, then x >= 0 would have been evaluated immediately and would fail, since x is not defined (and if it were defined, x >= 0 would just return a boolean). To emulate that, you'd probably have to pass a string "x >= 0", which the function would then have to parse (that would be a DSL as well, but one you're writing from scratch). The difference is that with a macro you can use the language's parser directly and compile straight from the result.
You usually use macros to change the semantics of Lisp:
(foo (bar) (baz))
If foo is a function, then (bar) will be evaluated, (baz) will be evaluated and then (foo X Y) will be evaluated where X and Y are the results of evaluating (bar) and (baz).
If foo is a macro, then none of the above is necessarily true. Macros let you implement new control-flow constructs, which may be necessary for a DSL (or at least a low-boilerplate DSL; you can always fake control flow by wrapping every expression in a lambda, but the idea is to make something easier to write, not harder).
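A minimal control-flow construct along these lines, sketched as a macro (MY-UNLESS is illustrative only; Common Lisp already has UNLESS):

```lisp
;; As a function, the body would already have been evaluated at
;; the call site; as a macro, the body runs only when TEST is false.
(defmacro my-unless (test &body body)
  `(if ,test nil (progn ,@body)))

;; (my-unless (= 1 2) (print "not equal")) runs the body.
```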
It's a fairly well-accepted recommendation not to use a macro when a function will do[1]. Everything bad people say about macros applies equally to functions, just with the dial turned up a bit; they both abstract behavior by hiding its implementation.
The advantage is that this reduces mental load when reading code: imagine if you had to parse a block of code and say "oh yeah, that's just a simple median-of-3 quicksort" every time, instead of just reading (sort ...). That would increase your mental load both when reading and when writing.
On the other hand, when something goes wrong in sort (even if it's not a bug in sort itself, maybe some garbage was passed in), the fact that it's a function call actually increases the mental load when debugging. Good tooling that lets you print stack frames and such really improves things.
The same is true for macros, but things are worse in both directions since macros are more powerful. You can write better abstractions to decrease the mental load even more, but when things go wrong, more things can go wrong because macros are less constrained than functions.
Again, good tooling can go a long way to reducing the debugging pain. Stepwise macro expansion is a big win and being able to do it in-place is even better.
1: With the exception that Lispers will use macros to avoid requiring explicit lambdas. For example, the WITH-FOO macros common in Lisp can all be written using lambdas, and it would even be idiomatic to do so in many functional languages. From what I can tell, this originated with the extra computational expense of lambdas, but it persists because the macro syntax is more uniform with LET and friends.
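For comparison, the lambda-based idiom the footnote describes might look like this (ACQUIRE-FOO and RELEASE-FOO are hypothetical helpers):

```lisp
;; Function-based version: the caller passes an explicit closure.
(defun call-with-foo (thunk)
  (let ((foo (acquire-foo)))
    (unwind-protect (funcall thunk foo)
      (release-foo foo))))

;; Macro version: same semantics, but the call site's syntax
;; lines up with LET and friends.
(defmacro with-foo ((var) &body body)
  `(call-with-foo (lambda (,var) ,@body)))
```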
I was a teenage Lisp enthusiast. One thing that's struck me as I've gained more professional experience is that writing the code isn't actually the difficult part of software engineering. It seems that way for personal projects, or if you're just getting started with coding.
Your comment is interesting. In the first sentence Lisp is a good solution against complexity; in the second one it is not.
A good litmus test for complexity is the following: as a developer analysing software in order to understand and modify it, how many lines of code do I have to read in order to grasp what is going on and what I can do to solve my problem?
In my opinion this is the main goal of a good software architecture. The advantage of Lisp is that I can create a DSL-like library that precisely describes my domain. The issue is that, because macros can have dramatic effects on the final program, I have to check all of them carefully in order to understand a piece of code. That, and the inability to quickly look up what fields are inside an object or the exact API of a function.
A common base language and static types are the best tools I know to create useful boundaries. Another way is to split the software but this comes with other issues.
The use of the term "powerful" might be a bit misleading. Powerful is good. Here powerful refers to "easy metaprogramming". Metaprogramming is something you need in an ecosystem, but you need to hide it a bit so that you only use it when you really need it. I find the lisp community a little bit too proud of their metaprogramming capabilities.
> The issue is that because macros can have dramatic effects on the final program, I have to check carefully all of them in order to understand a piece of code.
Do you have an example where you had to carefully check a macro when writing your Lisp code? I have never looked into macro code more closely than function code when writing Racket programs, and mostly I don't know or care if whatever I am calling is a macro or a function.
> This and the inability to look up quickly what fields are inside an object or the exact api of a function.
At least within Clojure and Racket, you have IDEs like Cursive and DrRacket that will show you function documentation and do code completion. Or are you saying that the presence of macros alters this somehow?
"how many lines of code do I have to read in order to grasp what is going on and what I can do to solve my problem?"
I'm not sure that lines of code alone are that useful here, or APL would be everyone's idea of a perfect language, since it can express so much in so few lines.
The Lisp community in particular seems to value verbosity over terseness, preferring long, descriptive function names and variable names, which make for more lines of code, but arguably greater readability.
I personally value clear code far higher than terse or clever code. I'd much rather read over a page of easily understandable code in 5 minutes than puzzle over a single line that does the same exact thing for an hour.
Even with the long names, Lisp code can be very compact, because the language is very expressive. In my experience, you can implement the same functionality in Clojure or in Java with the Clojure version being 3x to 5x smaller than the Java one. This does not necessarily hold for all domains and all code, but it is often the case.
Exactly. I never understood the "Lisp curse". Every language allows one to write overly complicated, bad code. Lisp just makes it easier due to homoiconicity and macros - but nothing is keeping a company from conducting code reviews and having code standards, like for every other language.
Because Lisp allows you to define language-level abstractions that affect control flow. Those "abstractions" are always leaky, and everyone has to understand their implementation to be able to work with them or read code that uses them.
Other languages limit what you can do with abstraction. You get libraries with less nice APIs, but you have less digging to do to understand a piece of code that makes use of them.
Disagree. "Control flow-affecting" abstractions aren't really anything special or extra difficult. You always have to understand what the arguments of a function/macro call mean. And random macros don't just randomly leak project-wide control flow decisions.
Inversion of control is used on large teams, and at its best it is, IMO, more leaky and harder to understand than well-written control-flow macros in Lisp.
Because they're custom, and given enough people on a team everyone has to deal with lots of custom abstractions. I'm not talking about adding a WHILE-loop, it's really the least interesting use of macros from my perspective.
Kind of. SBCL will flag type errors at compile time if it can find them, but there isn't any practical way (that I'm aware of) to force it to accept only code that has sufficient type annotations that it can be shown to be type correct at compile time.
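For example, SBCL honors standard Common Lisp type declarations and will warn at compile time when it can prove a mismatch, but nothing forces you to annotate everything (the function name below is made up):

```lisp
;; Declare a signature; SBCL's compiler treats declarations as
;; assertions and flags provably wrong calls at compile time.
(declaim (ftype (function (fixnum fixnum) fixnum) add-fixnums))
(defun add-fixnums (a b)
  (+ a b))

;; (add-fixnums "one" 2) ; SBCL warns here about the string argument
```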
Racket has typed racket, but then you are getting into pretty obscure territory. You might be able to have strict standards for your own code, but you'll still be plugging in to a dynamically typed ecosystem.
I don't find compile-time type checking useless in either of those languages. It's relatively verbose, given the lack of type inference, but it still catches lots of errors at compile time that would otherwise be runtime errors.
I doubt the claim that the powerful tools Lisp provides don't work in large teams. OOP was declared the default abstraction for managing software in large teams, yet there was still a lot of unmanageable code; but people kept using it and pooling their experiences, creating innumerable design patterns to handle each limitation, and through "natural" selection the dos and don'ts of OOP became clearer and clearer.
Lisp, on the other hand, not only had less of that collective experience, due to becoming less popular, but it was also famous as the language that gives super powers, allowing programmers to do 10x more, so the experience was also biased toward single-dev performance. The Lisp Curse is a cultural problem, not a technological one: you don't need to reinvent stuff just because it is easy (and fun).
I'm optimistic, though, since the new generation of languages (Clojure, Elixir, Julia, Nim, Rust) is increasingly going against the entrenched belief in OOP (things like inheritance) and incorporating more Lisp features: macros, code as data, and everything-as-an-expression. This means more and more large dev groups will have access to these tools, and reason to make them scale, in order to get a little of that super power under control.
Have to give Lisp credit for pushing the importance of engineering. In a time when FORTRAN spaghetti and C reigned supreme, Lisp pushed for programmer-friendly abstractions and hiding complexity in black boxes. In a time of thinking about algorithms, Lisp pushed thinking about abstractions.
If types are your thing, Shen is an interesting Lisp: it has a sound type system (to the point of literally embedding sequent calculus in the language) and other traditional functional features like pattern matching.
Why would you think I'm implying something that's obviously false?
I just think the choice of programming language (unless it's pathologically bad, e.g. writing device drivers in Python) is a minor factor in real world software engineering.
> is a minor factor in real world software engineering
I don't know about that. A language is not merely a language. It comes with an entire ecosystem: libraries, community, etc. Not to mention your personal expertise and preference.
I am certain that, depending on your language selection, your development experience and the quality of the output will vary dramatically.
For example, Java, JavaScript, PHP, Go, Python, and Ruby are all valid choices for a web application. Depending on what you choose, your experience will be different, no?