Lisp macros, of course, know nothing about types, because, in old-school Lisp, there are, lexically, no types. But in Rust, at expansion time, types are usually absolutely nailed down. (...Unless the macro is expanding to a generic definition; but even then, there are named types, you just don't know what they would be bound to).
Or maybe they do understand types, now? I haven't checked lately.
Rust macros understand types in the sense that they understand which parts of the syntax tree are specifying a type and which are expressions, but I think it's fundamentally impossible for them to understand the actual types of variables.
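To make that distinction concrete, here's a minimal `macro_rules!` sketch: the macro can match a type *position* (`$t:ty`), but it only ever sees tokens. Whether `$e` actually has that type is checked later by the compiler, never by the macro itself.

```rust
// The macro matches a syntactic type ($t:ty) and an expression ($e:expr),
// but it cannot inspect what type $e evaluates to -- it just rearranges tokens.
macro_rules! annotate {
    ($e:expr => $t:ty) => {{
        let v: $t = $e; // the *compiler* checks this after expansion
        v
    }};
}

fn main() {
    let x = annotate!(1 + 2 => i64);
    assert_eq!(x, 3i64);
}
```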
Of course it is easier to talk about this than to design and implement it. Considering embedded and mutually recursive expansions, the Halting Problem rears its head, and there may not be a fixed point, or anyway none in the immediate century. But those would be exceptional cases.
Consider that Rust macros are allowed to have side effects and access outside resources; they aren't pure functions. I'd argue that any system that runs them repeatedly in an attempt to find a fixed point is either incorrect (backwards-compatibility-wise) or an altogether new macro system that happens to run in parallel to the current one and call itself the same thing (debatable, I know). Nor can we simulate the proc macro and only let the side effects occur once we reach a fixed point, because the proc macro's behavior can depend on the values returned by those external systems.
The halting problem is arguably not a new theoretical issue, since the type system is already Turing-complete (up to arbitrary limits on recursion depth, which could exist here too).
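As a small illustration of that Turing-completeness, here is the well-known type-level Peano-arithmetic trick, where the trait solver performs addition at compile time (a sketch using only std; the names `Zero`, `Succ`, `Add` are my own, not from any library):

```rust
// Type-level Peano numbers: the trait solver "computes" addition during
// type checking, bounded only by the recursion limit.
#[allow(dead_code)]
struct Zero;
#[allow(dead_code)]
struct Succ<N>(std::marker::PhantomData<N>);

trait ToUsize { const VALUE: usize; }
impl ToUsize for Zero { const VALUE: usize = 0; }
impl<N: ToUsize> ToUsize for Succ<N> { const VALUE: usize = N::VALUE + 1; }

trait Add<Rhs> { type Output; }
impl<Rhs> Add<Rhs> for Zero { type Output = Rhs; }
impl<N, Rhs> Add<Rhs> for Succ<N>
where N: Add<Rhs> {
    type Output = Succ<<N as Add<Rhs>>::Output>;
}

fn main() {
    type Two = Succ<Succ<Zero>>;
    type Three = Succ<Two>;
    // 2 + 3 is evaluated entirely by the trait solver.
    type Five = <Two as Add<Three>>::Output;
    assert_eq!(<Five as ToUsize>::VALUE, 5);
}
```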
Still, the types of many named entities will typically be knowable. So, a macro system that can provisionally operate on types when they are known, or even that supports macros that require that the types of certain names be known, seems doable, in principle. It would need a good usage story to get any traction.
Nowadays anything that's a bit more complex should almost certainly use proc macros instead, which allow much saner implementations than complex, recursive macro shenanigans.
[ https://github.com/m-ou-se/nightly-crimes nightly-crimes! blows away your compiler, running it again in a new environment where it will allow nightly features even though you've got a stable compiler installed... ]
That macros have access to the entire language, including arbitrary IO, is the defining feature of proc macros (not without controversy). The insanity here is the Rust compiler team adding the `RUSTC_BOOTSTRAP` env var, a hack used to bootstrap rustc with a stable compiler despite the nightly features used by the codebase.
All nightly-crimes does is use `std::process::Command` to rerun the compiler with the variable set, which tells rustc to throw all concepts of stability out the window.
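A minimal sketch of that trick (just building the command with std; I'm not claiming this is nightly-crimes' actual code, and it only constructs the invocation rather than spawning it, since that requires rustc on PATH):

```rust
use std::process::Command;

fn main() {
    // Re-invoke the compiler with RUSTC_BOOTSTRAP=1, which tells a stable
    // rustc to accept nightly-only #![feature(...)] gates.
    let mut cmd = Command::new("rustc");
    cmd.env("RUSTC_BOOTSTRAP", "1").arg("--version");
    assert_eq!(cmd.get_program(), "rustc");
    // To actually run it: cmd.status().expect("rustc not found");
}
```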
I haven't been following developments, but one of the ideas (it even has a PoC, IIRC) was to distribute and run proc macros as WebAssembly, both to improve build times and to prevent such shenanigans.
night_crimes! must be possible unless/until sandboxing is implemented for proc macros, and so the insanity is the same regardless of how it's achieved. Which is fine, but like I said, these are big guns; it's not good to reach for them just because they're tidier than the couple of declarative macros that would get your job done.
There are things that can be done to improve proc macros, like executing them in a WASM runtime, but currently they should only be used when necessary.
I've written macros in asm, in TeX, in Lisp, and even in TRAC (a macro-based programming language invented by Calvin Mooers in the '60s, see ). The most impressive macros I've used are found in the fantastic LaTeX graphics package named tikz. The manual for tikz is 391 pages long. This amazing graphics package is implemented in LaTeX and TeX macros.
So if tikz is so great, what's wrong with macros? The tikz package is great in spite of being implemented in macros. Looking over its implementation, I'm stunned that its developers were able to achieve it.
Macros are used because they abstract a mechanism that programmers would like to use, but macros easily become leaky abstractions. They can easily have semantics that depend on or affect other program elements that are not visible to the programmer. Macros can be written using other macros; macros can define new macros. Consequently, even simple macro systems (like the C preprocessor) are capable of doing any kind of computation. These machinations are not visible to the programmer using the macros, so macros make accurate reasoning about the correctness of a program more difficult, in the same way that subroutines having unrestricted access to global variables make correctness harder to (informally) verify.
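As a tiny Rust illustration of macros-as-computation, here's a recursive `macro_rules!` macro (my own toy example) that counts its arguments at expansion time by peeling them off one by one:

```rust
// A recursive declarative macro: each expansion step strips one argument
// and adds 1, so the whole "computation" happens during macro expansion.
macro_rules! count {
    () => { 0usize };
    ($head:expr $(, $rest:expr)*) => { 1usize + count!($($rest),*) };
}

fn main() {
    assert_eq!(count!(), 0);
    assert_eq!(count!(10, 20, 30), 3);
}
```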
Abstraction has proven to be an essential means for constructing programs. No one wants to program in an environment where goto's in a flat namespace replace all functions. I feel like macros are a flawed mechanism for abstraction; I would rather see the language syntax expanded to accommodate the desired features than to have them implemented via macros.
As impressive as the capabilities of macros are, wouldn't it be better to rely on functions as the primary abstraction mechanism used by the language's programmers?
Am I wrong?
Procedural macros can run arbitrary Rust code and can therefore do anything, but in general they don't -- and if you work within the interface you're given, the result will be hygienic.
There's a sense in which macro hygiene is analogous to Rust's safety guarantees -- the compiler will normally guarantee the abstraction, but there's an escape hatch where we rely on the person implementing the code that uses unsafe or implementing a procedural macro to not break the abstraction for their callers. A leaky abstraction is considered to be a bug.
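A minimal illustration of declarative-macro hygiene, using nothing beyond std: the `tmp` introduced inside the macro lives in its own syntax context, so it can neither capture nor shadow the caller's `tmp`.

```rust
// Hygiene in action: the macro-internal `tmp` is invisible to the caller.
macro_rules! sum_via_tmp {
    ($a:expr, $b:expr) => {{
        let tmp = $a; // this `tmp` belongs to the macro's expansion only
        tmp + $b
    }};
}

fn main() {
    let tmp = 100; // the caller's `tmp` is untouched by the expansion
    let sum = sum_via_tmp!(1, 2);
    assert_eq!(sum, 3);
    assert_eq!(tmp, 100);
}
```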
Question to experienced Rust people: What are some cool use-cases that you've seen for Macros?
I've used ActiveRecord and Django's ORM, and neither provide anything like this.
The closest I've seen is SQLModel, a library on top of SQLAlchemy built by the author of FastAPI.
However, last I checked, the kinds of queries that Diesel allows are more restricted than those of Django or ActiveRecord. Of course, you can always write custom SQL for things like that when using Diesel.
I believe also that Diesel is either maintained by or was created by the same peeps behind ActiveRecord. I also read somewhere that a running joke among them is that they learnt what not to do in writing an ORM through ActiveRecord.
A question I have for y'all is: can anyone make a comparison between Diesel and TypeORM?
For a game it could fit in some parts, but you would be very limited by the ecosystem, so I would also not recommend it unless you really know what you are doing and have significant resources.
For low-level systems programming it is pretty nice, and maybe the best alternative right now.
For a compiler, I would say it depends on whether performance is the main priority: if so, yes; otherwise no.
There are plenty of other cases of course and for each the answer would be different.
It is not really a general purpose language that you could use without worry for everything like Python, at least not yet.
Almost always when I use Python libraries I have to get used to a new documentation format, learn how to navigate it, and so on. And then it has to be detailed enough to make up for the lack of type signatures. I cannot tell you how often I have to skim a significant portion of my dependencies' source just to use them. (Though there are counterexamples; the stdlib and numpy in particular actually have okay docs.)