That's a common misconception that Lisp macros are mostly used to 'delay' evaluation.
What Smalltalk calls 'blocks' are just (anonymous) functions in Lisp. Books like SICP explain in detail how to use that for delayed evaluation in Lisp/Scheme:
> misconception that Lisp macros are mostly used to 'delay' evaluation.
It was in the examples. No generalization to all of LISP was made or intended, though it would be interesting to make a survey study of actual macro usage by developers.
It is also clear that you actually need a delayed-evaluation mechanism in order to get something like "if" out of the language and into the library.
> what Smalltalk calls 'blocks' are just (anonymous) functions in Lisp
"but blocks/anonymous functions seem quite different from alternate argument-passing mechanisms."
I think the history also explains why Smalltalk blocks were a bit odd as anonymous functions go: they did not start out as a "function" mechanism, they started out as just a delayed evaluation mechanism, and Smalltalk has no ordinary functions, just the anonymous kind. Which is also odd.
The Smalltalk developers were well aware of LISP, cite it as a major source of inspiration in fact. They initially had a more macro-ish mechanism (the unevaluated token-stream), then moved to a pure delayed-evaluation mechanism which then turned into blocks. They never added a macro mechanism again. I think that's interesting. YMMV.
- The non-selected possibility isn't evaluated/executed (i.e. the evaluation of the possibilities is delayed until after one has been selected)
This is very important for imperative languages, since executing both branches would have observable effects even if we only returned one result.
This is also true in functional programming languages, even pure ones, since it's needed to e.g. terminate recursive calls. For example, if we define factorial by branching on whether the argument is 0 (base case) or not (recursive case), then evaluating both branches before selecting one would cause an infinite recursion for all arguments.
In the case of your Lisp code, we couldn't implement a recursive function like factorial unless we delay the evaluation e.g. by using macros to do the selection, or by switching from Lisp's default evaluation order (call-by-value) to something non-eager like call-by-name or call-by-need.
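The difference is easy to sketch in Python (an illustrative toy, not Lisp; the function names are mine): an `if` whose branches are evaluated before selection recurses forever in the factorial example, while one that takes thunks terminates.

```python
def eager_if(cond, then_val, else_val):
    # Both branch values were already evaluated by the caller.
    return then_val if cond else else_val

def lazy_if(cond, then_thunk, else_thunk):
    # Only the selected branch is evaluated.
    return then_thunk() if cond else else_thunk()

def factorial(n):
    # Using eager_if here would evaluate the recursive branch
    # even when n == 0, recursing without end:
    #   eager_if(n == 0, 1, n * factorial(n - 1))
    return lazy_if(n == 0, lambda: 1, lambda: n * factorial(n - 1))

print(factorial(5))  # 120
```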
Yes. Obviously. Of course. That's how Smalltalk does it, though it doesn't have general functions, it only has anonymous functions aka (roughly) blocks. Any other language that can pass functions or some sort of equivalent as arguments can obviously also do this.
(A little surprised that this compiled on the first try. More surprised that clang didn't show any warnings, even with -Wpedantic. Shows you how well I understand C/C++ compilers ¯\_(ツ)_/¯ ).
And function definitions, lambdas, blocks etc. all delay the evaluation of their bodies.
> they did not start out as a "function" mechanism, they started out as just a delayed evaluation mechanism
Do you have a citation for this? Not disputing it, I'd just like to see whether there's something in Smalltalk's history I have overlooked, or didn't notice on my first reading.
That was what the article was about. The scans link to the source material Smalltalk-80: Bits of History, Words of Advice, one to the book on Amazon, the other to a PDF scan.
Quite frankly, I was a bit surprised so much of the discussion has been about LISP features that are neither in dispute nor subject of the article.
> they did not start out as a "function" mechanism, they started out as just a delayed evaluation mechanism
I still don't see where you are getting this from. The scans you provide [1] don't make this claim. For instance, look at page 17 of Technical Characteristics of Smalltalk-76 [2]:
> heights _ students transform each to each height
The document seems to have some OCR problems - I'm pretty sure that underscore is meant to be a left arrow, or := in more modern Smalltalks. I understand this to be the equivalent of Smalltalk-80's
> heights := students collect: [:each | each height].
which is clearly a mechanism for mapping a function over a collection, not just a delayed evaluation mechanism.
[1] You might add to your article a footnote specifying that the citations from Bits of History, Words of Advice are from pages 14-15
> Quite frankly, I was a bit surprised so much of the discussion has been about LISP features that are neither in dispute nor subject of the article.
It's because Lisp macros and their motivations were described inaccurately, and some people want to make sure that these inaccuracies are not perpetuated.
When an FEXPR is called, it gets unevaluated args and can then at runtime decide what to do.
Macros OTOH are a different mechanism, where expressions get rewritten at macro expansion time, which can be for example at compile time. The macro then gets called with arbitrary expressions it encloses (those don't need to be valid code by themselves) and computes new source.
Thus macros are code generators from expressions. In a compiled implementation, macro expansion is done with compilation interleaved: each form gets expanded, even repeatedly until it no longer expands into a macro, and then the resulting non-macro form gets compiled.
Thus in a way a macro does not delay execution, it does the opposite: it actually shifts computation to compile time -> the computation of code from expressions and the computation of arbitrary side effects in the compile-time environment.
In an interpreted Lisp, the macro also gets expanded at runtime, as part of evaluation: there, eval calls the macroexpander repeatedly until it gets a non-macro form.
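That expand-until-fixed-point loop can be modeled in a few lines of Python (a toy sketch with made-up names, nothing like a production expander): forms are nested tuples, and expansion repeats while the head names a registered macro.

```python
# Toy model of repeated macro expansion: forms are nested tuples,
# (operator, *args). Expansion repeats until the head is no longer
# a registered macro; only then would compilation/evaluation proceed.
MACROS = {}

def defmacro(name):
    def register(fn):
        MACROS[name] = fn
        return fn
    return register

def macroexpand(form):
    while isinstance(form, tuple) and form and form[0] in MACROS:
        form = MACROS[form[0]](*form[1:])  # macro computes new source
    return form

@defmacro("when")
def expand_when(test, body):
    # (when test body) -> (if test body nil)
    return ("if", test, body, "nil")

@defmacro("unless")
def expand_unless(test, body):
    # (unless test body) -> (when (not test) body), itself a macro form
    return ("when", ("not", test), body)

print(macroexpand(("unless", "p", "x")))
# ('if', ('not', 'p'), 'x', 'nil')
```

Note that `unless` expands into another macro form, so the loop runs twice before producing plain code.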
"The primary purpose of the DATA statement is to give names to constants; instead of referring to pi as 3.141592653589793 at every appearance, the variable PI can be given that value with a DATA statement and used instead of the longer form of the constant. This also simplifies modifying the program, should the value of PI change."
Or, quoting the other aphorism, what is constant for someone is a variable for someone else.
What is run time for LISP (which lacked a good compiler in its early stages - the famous two-horse-asses width constraint) can freely be compile time for another language. Or for a Lisp, for that matter, in our time.
The compile time expansion allows for better (because faster) error checking.
I don't think what I wrote is a FEXPR, which doesn't evaluate the result of the function call. I'm having a hard time parsing what you wrote or any of the sources you linked in such a way that says the semantics of macros differs from what I wrote (certainly the implementation gets a lot more complex and there are subtleties).
CL-USER 36 > (test)
(:RUNTIME :B 21 :C 42) ; <- the printed output
(:RUNTIME :B 21 :C 42) ; print also returns its arg
Thus all the macro expansions have been done at compile time and we have generated some code there. No macro expansion at runtime.
Thus it has to do with code generation and code execution at macro-expansion time - nothing about 'delaying' anything.
The example isn't useful, but imagine a macro INFIX
(infix a * b + c)
which rewrites the expression to the Lisp expression:
(+ (* a b) c)
There is nothing about 'delaying' -> it's just rewriting the form. Ideally at compile time. There are many other examples which do something different.
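The rewriting-only nature of INFIX can be sketched in Python (a toy, assuming just `+` and `*` with the usual precedence; nothing is evaluated, the output is simply new source):

```python
# Toy sketch of the INFIX idea: a "macro" that only rewrites a form,
# ('a', '*', 'b', '+', 'c') -> ('+', ('*', 'a', 'b'), 'c').
# No evaluation is involved or delayed; the output is just new source.
def infix(*tokens):
    tokens = list(tokens)
    # First fold the higher-precedence '*' ...
    i = 1
    while i < len(tokens):
        if tokens[i] == "*":
            tokens[i - 1:i + 2] = [("*", tokens[i - 1], tokens[i + 1])]
        else:
            i += 2
    # ... then fold '+' left to right.
    expr = tokens[0]
    for i in range(1, len(tokens), 2):
        expr = ("+", expr, tokens[i + 1])
    return expr

print(infix("a", "*", "b", "+", "c"))  # ('+', ('*', 'a', 'b'), 'c')
```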
While experienced lispers already know this, it may not be clear to everyone that in Common Lisp "compile time" routinely happens repeatedly in a running image. cl-ppcre[1] relies heavily on this. It can take a regexp pattern as a runtime input and compile a recognizer down to machine code on the fly. This technique gets used routinely.
Edit: In fact, I strongly suspect that this resulting unsuitability of Lisp for proprietary code played a major role in its commercial failure before the widespread advent of services.
A normal function call evaluates the arguments and then calls the function.
Because macro expansion does something before evaluating the arguments, you can say that evaluating the arguments has been delayed.
I feel this is just looking at the same thing from two different directions.
Of course, macro expansion does _more_ than just delay the evaluation of the arguments, and if people say that macros delay evaluating the arguments, you might think that's all they do.
> Because macro expansion does something before evaluating the arguments
It does something independent of evaluation. When the code gets compiled, the macroexpander already has transformed the code. The code might never be evaluated in this Lisp. It might be written to disk and later be loaded into another Lisp.
Whether something does not get evaluated, gets evaluated later, always, or never -> that depends on the generated code.
Thus 'delaying' something is the wrong idea and it limits the imagination of what macros are used for. Think of 'general code transformation', often independent of execution in a compilation phase.
A macro or FEXPR may never evaluate an operand at all. They simply receive the unevaluated forms and decide what to do with them, with explicit evaluation being just one option which can be performed on the operand.
Delayed (lazy) evaluation merely prevents an operand being evaluated at the time of call and until you want to read its reduced value, after which it is automatically evaluated.
But an infix transform does delay the evaluation of expressions you've passed to it. Eg (infix (avg b) * (sgn a)).
And most practical uses of macros involve passing in expressions that will be later evaluated verbatim, e.g. (time (reduce + (range 100)))
I was only intending to respond to this:
"That's a common misconception that Lisp macros are mostly used to 'delay' evaluation."
Which, it seems to me, just confuses the matter in response to the article that was posted. At a very low level, macros operate by having the evaluation of their arguments delayed. At a higher level, they're used to implement sugared forms which usually contain valid code (though not necessarily code that makes sense in isolation).
I know you're very experienced with lisp, and I'm not trying to correct you technically, but to me this stuff only clicked when implementing a metacircular evaluator and learning when and how to perform evaluation. The blog post to me reads like the author was going through a similar process.
> But an infix transform does delay the evaluation of expressions you've passed to it. Eg (infix (avg b) * (sgn a)).
It doesn't 'delay' anything. It just rewrites at compile time
(infix (avg b) * (sgn a))
into
(* (avg b) (sgn a))
That's all. At runtime only the rewritten statement will be executed.
> At a very low level, macros operate by having the evaluation of their arguments delayed
Not at all. The main purpose of macros is to compute code from arbitrary expressions at compile time. There is no 'delay' -> it's mostly about rewriting expressions at compile time.
> implementing a metacircular evaluator and learning when and how to perform evaluation.
You can build a macro expansion phase into an interpreter. That's what Lisp interpreters do. But the main purpose of macros is to enable code transformations at compile time and thus only to have a cost at compile time, not at runtime. In an interpreter the macro expansions would also happen at runtime and have a cost there. But Lisp compilers are popular and to make the most out of code transformations at compile time, the idea of macro and their expansion at compile-time was introduced. Thus one gets for example new embedded languages, without the need to parse them at runtime.
I believe that if you're trying to understand what macros do from a fundamental level, treating them like functions with different semantics is a very good learning method. You implement them in a SICP style interpreter and then write a few. Totally clears up the mystery.
I'm not arguing with you about what happens in any production lisp. I'm sure you know more than me, but I do understand how they're implemented in practice, and the warts that brings - eg no (apply or <list>), and first class macros are so esoteric purely from the runtime cost that they're barely even written about in the literature.
I'm just saying the mental model of where macros fit in in an interpreter has value, and that view has not changed. In my opinion you're an expert coming in correcting a bunch of people attempting to learn the basics, and you're not wrong, but you are muddying the waters.
treating them like functions with different semantics is a very good learning method
If you mean run-time functions that act a bit funny about evaluation order, that mental model is incompatible with common introductory examples like `let`.
No, it's not. You eval the return value of the call. There is no loss of expressive power. Compile time macro expansion in this model is simply partial evaluation.
The first argument is not evaluated but used as data, the second argument is evaluated after the macro function returns, as part of the macro evaluation rules, i.e. (eval (let '((x 3) (y 5)) '(+ x y)))
In conclusion, it's working exactly as I laid it out in my original post:
fn: (a b c) => (call a (eval b) (eval c))
mac: (a b c) => (eval (call a b c))
Everyone jumped on it, as production lisps don't implement macros this way. But it's the natural way to implement them in a SICP style metacircular evaluator, and a good way to learn them.
This is about mental models, not implementation details. And in my opinion this thread is full of experts who have forgotten what it's like to not understand how macros work.
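Those two rules can be run directly in a toy evaluator (a Python sketch of the SICP-style model described above, not of any production Lisp; all names are made up):

```python
# Tiny evaluator sketch of the two rules above:
#   fn:  (a b c) => (call a (eval b) (eval c))
#   mac: (a b c) => (eval (call a b c))
# Expressions are tuples; env maps names to values.
def ev(expr, env):
    if isinstance(expr, str):
        return env[expr]
    if not isinstance(expr, tuple):
        return expr  # self-evaluating literal
    op = ev(expr[0], env)
    if getattr(op, "is_macro", False):
        # mac rule: call on unevaluated args, then eval the result
        return ev(op(*expr[1:]), env)
    # fn rule: evaluate the args first, then call
    return op(*[ev(a, env) for a in expr[1:]])

def first_macro(b, c):
    # A macro-ish "first": discard c unevaluated, expand to b.
    return b

first_macro.is_macro = True

env = {"first": first_macro, "x": 42, "boom": None}
# (first x (boom)) returns 42 without ever evaluating (boom),
# which would fail, since env["boom"] is not callable.
print(ev(("first", "x", ("boom",)), env))  # 42
```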
The sequence of events you describe is a thing that functions do not do at run time: they do not operate on pieces of syntax. Macro expansion time is a different series of events than run time. Macros operate on pieces of program syntax, not on anything from run time.
Everyone jumped on it, as production lisps don't implement macros this way. … This is about mental models, not implementation details
It looks to me like you're the one who brought up implementation details, talking about what happens "at a very low level." You're the one who has presented a questionable implementation strategy as a mental model and has been corrected on it multiple times by multiple people. Everyone else has been describing the semantics of macros. Having separate timelines for expansion and execution is a semantic distinction, not merely a common implementation strategy.
And in my opinion this thread is full of experts who have forgotten what it's like to not understand how macros work.
Your confusion is because you insist on a mental model of macros that is incompatible with their semantics. Letting that model go is necessary in order to understand macros. Otherwise, refusing to distinguish expansion time from run time, you are working towards reinventing fexprs.
SICP is a great way to learn lisp and learning macros by extending the evaluator is a natural progression.
I don't care how many lisp experts are "correcting" me on the way it works in the real world, because:
a) I actually do understand how they're implemented in existing production lisps. No, really. I am not confused. In my own toy lisp compiler, I rely on first class macros and am attempting (and so far failing) to use partial evaluation to make the implementation cost somewhat sane. I am under no illusions this is normal.
b) Being an expert doesn't automatically make you good at teaching (often the correlation is reversed), and the article being commented on was a tutorial on macros.
The smug lisp weenie trope exists for a reason, and it's that the lisp community is too busy falling over themselves to prove how smart they are with arcane knowledge to give a shit about the experience beginners face. Clojure gets no end of flak from CL users who bristle at the suggestion that it even be called a lisp, but the community actually gets it and does a great job of teaching.
I have an opinion on how best to teach macros. Maybe I'm wrong. Maybe it's not a good idea to extend SICP with macros, or maybe there's a better way, and I'd love to hear arguments in those directions. But correcting it by explaining how they're implemented in CL is absolutely baffling to me.
SICP is a book for beginner computer science students, with a focus on functional programming as base.
As a Lisp book it's not really good, since it covers only very little actual Lisp and uses only a core Scheme language.
There is literally nothing about macros in SICP. They are mentioned only in a few places (for example it is mentioned that DELAY would be implemented by a macro).
> The smug lisp weenie trope exists for a reason, and it's that the lisp community is too busy falling over themselves to prove how smart they are with arcane knowledge to give a shit about the experience beginners face
Since you ignore the large amount of literature, teaching & research on macros in the Lisp community, what you write is rather strange. Surprise: people teaching Lisp in the 60s already 'gave a shit about the experience beginners face'.
> I have an opinion on how best to teach macros. Maybe I'm wrong. Maybe it's not a good idea to extend SICP with macros, or maybe there's a better way, and I'd love to hear arguments in those directions. But correcting it by explaining how they're implemented in CL is absolutely baffling to me.
Lisp teachers usually taught macros in context of actual Lisp languages and actual usage examples. Personally I prefer that as a first approach, over trying to explain how to extend a toy implementation with some homebrew macro implementation.
That's fine; I prefer the latter. Maybe I'm wrong. But if I am, it's not because I don't understand how macros are actually implemented in the real world, as everyone in this massive thread seems to assume.
I regret my initial comment, which was written in haste. Had I known the sheer volume of people that would jump out of the woodwork to demonstrate their superior technical knowledge while avoiding the argument I was actually making, I'd have been a lot more careful about what I was trying to communicate (or, more likely, not have bothered, which would have saved us all a lot of time).
That is where the "smug lisp weenie" trope comes from, and it's real. I don't know what it is specifically about lisp that brings it out; I can't imagine having this discussion about C#, TCL or SQL.
Like I said previously, I don't believe being an expert automatically makes you a good teacher. Based on the responses in this thread, I doubt anyone here is actually good at it.
Were I disagreeing technically, it would be a different matter and I'd have long ago hit the books to see where I fucked up. The combined lisp experience on this thread is at least an order of magnitude more than mine, and most of you are probably smarter than me to boot.
You can't teach "macros": you have to choose and teach a specific macro system (at a time), whose behavior is linked to the way(s) it is implemented. Each has its own mental models; there is no single mental model for all macros everywhere.
Which is just as effective an argument against the entirety of SICP. In a real lisp, (list? <some-fn>) doesn't return true. You can't dig around inside a closure and walk the entire environment, which is made up of a list of pairs.
But the point of a toy model is that you can read an implementation, build your own, reason about it and send code through it with little difficulty.
Once you've built eval and sent real programs through it, it's not a big jump from that to understanding how real tools work, which can generally be done incrementally.
That was certainly my lisp journey, and that includes macros (which were a separate step). After implementing them in eval, learning how they're done in the real world was about as challenging as reading a restaurant menu.
> Everyone jumped on it, as production lisps don't implement macros this way. But it's the natural way to implement them in a SICP style metacircular evaluator, and a good way to learn them.
I don't think it is overly useful to explain to people how to extend a SICP-style metacircular evaluator with some custom macro system. It's often a part of the Lisp journey, but not something I would recommend for basic Lisp teaching.
If I want people to program in Lisp, I give them a real Lisp and teach them how to program with it.
> This is about mental models
Sure, but you are teaching them the wrong mental models (macros tied to an evaluator), while I would focus on the model of general code transformations and the tools to make effective use of them.
> And in my opinion this thread is full of experts who have forgotten what it's like to not understand how macros work.
Not sure who you're talking about, but I have been explaining macros to people since the 80s.
What are lisp macros for if not to delay evaluation?
One class of use cases not covered in the blog post is that a variable appearing as a function's argument can only be a bound occurrence, a use of some name already in scope. A variable appearing as a macro's argument can be the binding occurrence of that name. This is why (using Racket examples) match, define-struct, for/list, and such things aren't functions.
I think it's "'delayed evaluation' + 'transparent internal structure'". So delayed evaluation isn't enough, because the only thing you can do with N closures is to call them in any order, you can't create a complex new expression dependent on the substructure of the closures.
The closest thing to a macro in (eg. python) imo looks like this.
Instead of

    d = c * (a + b)

write

    class Adder:
        def __init__(self, left, right):
            self.left = left
            self.right = right

    class Multiplier:
        def __init__(self, left, right):
            self.left = left
            self.right = right

    d = Multiplier(Adder(a, b), c)
Fill out the __call__ function and/or create an evaluator that recursively calls each class instance with its parameters and you've made a system that works a lot like a Lisp with macros. Now you can do broadly /anything/, e.g. replace every instance of an Adder in the call tree with an adder-plus-one. That's using the second required element of my description of macros - the complex substructure - to perform arbitrary transforms. You can't do that with just closures, unless they make their bound parameters accessible (and normally they don't directly; in Python they do, because it's absurdly dynamic).

Of course this is not at compile time, it's at "delayed execution time", but the difference is kind of slight imo. The benefit of a real Lisp over doing this in Python is that you don't need to rewrite every function, operator and control-flow statement into your own abstracted level by hand, and it's much faster. Also you can complete the loop and trivially write it back down into eval-able source code text.
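To make the Adder/Multiplier idea above concrete, here is one way to fill out __call__ and walk the tree (a sketch; `evaluate` and `add_one_to_adders` are names I made up for illustration):

```python
# Each node evaluates its children recursively, and the exposed
# substructure lets us rewrite the tree before evaluation.
class Adder:
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def __call__(self):
        return evaluate(self.left) + evaluate(self.right)

class Multiplier:
    def __init__(self, left, right):
        self.left = left
        self.right = right
    def __call__(self):
        return evaluate(self.left) * evaluate(self.right)

def evaluate(node):
    # Leaves are plain numbers; internal nodes are callable.
    return node() if callable(node) else node

def add_one_to_adders(node):
    # The "macro-like" transform: rewrite every Adder in the tree
    # into an Adder whose result is one larger.
    if isinstance(node, Adder):
        return Adder(add_one_to_adders(node.left),
                     Adder(add_one_to_adders(node.right), 1))
    if isinstance(node, Multiplier):
        return Multiplier(add_one_to_adders(node.left),
                          add_one_to_adders(node.right))
    return node

a, b, c = 2, 3, 4
d = Multiplier(Adder(a, b), c)
print(evaluate(d))                     # (2 + 3) * 4 = 20
print(evaluate(add_one_to_adders(d)))  # (2 + (3 + 1)) * 4 = 24
```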
Take a macro like (infix x + vx * dt). It's not equivalent to calling some function with thunked arguments -- it has to analyze the literal text of the arguments to turn this into (+ x (* vx dt)).
Macros determine what requirements apply to the processing of syntax. Which parts of that syntax, if any, are evaluated according to what rules, in what environment.
i'll defer to lispm's expert opinion, but i always thought of macros as _conditional_ evaluation. `if` is a special form because it does not evaluate the branch not taken. it's tricky to implement `if` without using `if` or `cond`. macros make it easy to implement your own special forms. in your example, you might want to implement `first`.
(first b c) -> (eval b)
i can't think of a way (but it might be possible) to not evaluate c with a regular function. first is trivial and dumb, but it's obvious how to turn that into short circuiting `and` or `or`.
Macros are more general than that; they are generalized expression rewriters. `if` is tricky to implement in general because you need a primitive to select an alternative based on whether a given value is truthy or falsy. Let's pretend we have such a primitive that's not a special form, called `call-if`. Then you can implement `if` in terms of `call-if` by writing a macro that transforms `(if c then else)` into `(call-if c (lambda () then) (lambda () else))`.
(The semantics of `call-if`, by the way, are exactly the semantics exposed by the Smalltalk ifTrue:ifFalse: family of methods.)
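A sketch of those `call-if` semantics, in Python for illustration (the names are hypothetical; `call_if` is an ordinary function, and the lambda-wrapping is what the `if` macro's rewrite would produce):

```python
# A plain function that selects one of two thunks based on
# truthiness and calls it. An `if` macro would wrap its branches
# in lambdas and rewrite to a call of this function.
def call_if(condition, then_thunk, else_thunk):
    return then_thunk() if condition else else_thunk()

# What the expansion of something like (if (> x 0) "pos" (boom))
# would look like after wrapping the branches:
x = 5
result = call_if(x > 0, lambda: "pos", lambda: 1 / 0)
print(result)  # pos -- the failing branch is never evaluated
```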
But macros can do more than merely wrap things in lambdas, which alone is sufficient for delayed and conditional evaluation. They can quote identifiers that are passed into them, allowing you to write new defining forms and even introduce new bindings. A nontrivial, but understandable, example is the `define-record-type` special form from SRFI 9, which can define a new disjoint record type and a constructor, predicate, and field accessors for that type all in one form. If you are willing to break hygiene and have `defmacro` or something like `syntax-case` available, you can even introduce new identifiers from out of nowhere, and write something like Common Lisp's `defstruct` which lets you define a type and automatically derive names for the constructor, slot accessors, etc. from the names of the type and slots.
Macros are a tremendously powerful, general code generation and rewriting tool, built right into the language. Underestimate them at your own peril!
aww, now i'm embarrassed about my lack of imagination. I have to admit i forgot about the magic you can do with define.
Excellent point. i kinda sorta think defun in common lisp injects the new function at top level regardless, but in scheme it's scoped. So in CL you could probably get away with a simple lambda (or function) to make your struct syntax; scheme would need the macro to put all of that stuff at the top level.
In Common Lisp, DEFUN runs side effect code at compile time. For example it notes the function in the compile-time environment, it may record its definition/the location of the definition, ... It also may rewrite the code it defines: for example by adding some declarations, etc.
Smalltalk blocks are anonymous functions, sure, but the evaluation is delayed - you have to pass the message #value (or one of its variants) to the block. This is not the case in Scheme and maybe other Lisps, where evaluation is eager.
Passing Smalltalk blocks into functions behaves differently from passing Scheme/Lisp lambdas.
With a bit more effort, we could also parse a parameter list.
But Lisp does not go the route of making IF a function, because it typically provides three different types of language expressions and IF then is a special operator:
1) function calls
2) macro forms
3) a small set of special operators, which are implemented as built-in functionality. One of them is a core conditional operator. More complex conditional operators then are implemented as macros, which expand eventually into the core conditional operator. The interpreter and compiler will have to specially recognize and implement these special operators.
Okay, but if you have macros, you might as well use them. The idea is more about exploring what can be done when designing a less powerful language, without macros.
> The idea is more about exploring what can be done when designing a less powerful language, without macros.
Lots of things. But the language won't be able to easily compute code at compile time, which is a main purpose of macros. Lisp has a simple data format for source code, and thus code transformations are relatively simple to do and even integrated into the language.
Considering the username of the person you are replying to, you might consider that they are very likely to have implemented macros in a lisp.
In common lisp, defining a macro defines a function that receives as its arguments the unevaluated forms passed to it plus (optionally) the lexical environment. This part is indeed "essentially a lambda without argument evaluation."
The magic isn't there, the magic is in the interpretation of the value it returns. It can return any arbitrary lisp forms. This allows macros to do far more than just delay evaluation. Pretty much anything that involves code walking is not possible with lambdas, for example. Some setf expanders would be impossible with lambdas as well.
> If you ever try, you'll quickly find out that a macro is essentially a lambda without argument evaluation. That's it.
> If we take another look on it, yep, that's a form of delayed evaluation of lambda parameters, however I would prefer the term "delayed expansion".
To me, 'delayed evaluation' implies something like call-by-need (e.g. Haskell) or call-by-name (e.g. Algol60), where we can't distinguish an evaluated argument from an unevaluated one, since any attempt to 'look at' (i.e. branch on the value of) the argument will force it to be evaluated. For example, if `(expensive-calculation)` evaluates to `42`, there could be no difference in the behaviour of `(my-delayed-function (expensive-calculation))` and `(my-delayed-function 42)` (although there would presumably be a difference in running time).
In contrast, Lisp macros let us inspect the AST without forcing evaluation, so we can distinguish between these two values, and hence `(my-macro (expensive-calculation))` might behave completely differently to `(my-macro 42)`.
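That distinction can be sketched in Python (a toy transformer with made-up names; forms are tuples or literals, as in the earlier toy examples): the "macro" branches on the shape of the unevaluated form, which call-by-name/call-by-need argument passing cannot do.

```python
# A macro-like transformer receives the unevaluated form and can
# branch on its shape, returning different generated code for a
# literal than for a compound expression.
def my_macro(form):
    if isinstance(form, int):
        # Literal argument: fold the work at "expansion time".
        return ("quote", form * 2)
    # Non-literal argument: emit code that does the work at runtime.
    return ("*", 2, form)

print(my_macro(42))                          # ('quote', 84)
print(my_macro(("expensive-calculation",)))  # ('*', 2, ('expensive-calculation',))
```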
What Smalltalk calls 'blocks' are just (anonymous) functions in Lisp. Books like SICP explain in detail how to use that for delayed evaluation in Lisp/Scheme:
https://mitpress.mit.edu/sites/default/files/sicp/full-text/...