Actually a nice idea. Surely something most devs wish for from time to time. Though it's also dangerous and can lead to even more unreadable code. Linters will probably also have "fun" adapting to this. And debugging this...
Well, anyway, personally I don't like the syntax. Putting a sign at a word's end just feels unpythonic. The idea itself I favor more than I dislike it, as the advantages seem to outweigh the disadvantages. As this is a PEP, I assume this will come in any case?
A PEP is just a proposal. The index (https://www.python.org/dev/peps/pep-0000/) lists about 170 PEPs that are abandoned, withdrawn or rejected.
This worries me. If you're debugging a macro, you have to remember to increment the version every time you change it, or your changes mysteriously don't take effect. And you have to make sure your macros are clean and deterministic or you might bake things into the cache that don't belong there.
> The syntax tree of Python is available through the ast module.
That module can be annoying to work with because it doesn't make stability guarantees. I've even had to deal with code that broke in a maintenance release (3.8.4) because of this change: https://bugs.python.org/issue40870
I can't think of any better alternatives, besides doing nothing. But it looks like it'll be unusually easy to use wrong, and not something to be used lightly.
Still—one more thing you have to remember.
The issue with adding them to Python is that Python is already clean and easy to read (for the most part). They point out that Python's simplicity is one of its biggest strengths, and I think macros will just add clutter on top of Python. I also don't think we need them in Python. All of the examples they gave were underwhelming and didn't make a convincing case that this is a worthwhile feature.
I guess we'll see how things play out. If it gets approved and implemented, I assume that they've found good uses for it.
I've wished that Python had something similar to C#'s LINQ, where a user could express a query using familiar list comprehension syntax, but instead of the comprehension actually evaluating, my search library could somehow receive the AST and use it to build an optimized query. ORM libraries often play tricks with dunder methods to achieve a similar DSL feel. This could give me something very close to that.
I'm also drooling over the parser example, as someone who maintains a parsing library. I currently support building a parser from a grammar file because doing it in code is a bit clumsy, but the example makes it look pretty.
There's also countless times people have had a great idea for "with" blocks, but it turns out it's not really possible because it would require the with statement to analyze/capture the child statements inside. This proposal gives you exactly that.
The syntax isn't spectacular, but for the most part I can't think of anything much better. I do think the "sibling statement" syntax will be hard to understand visually. I would make it start with an @, since it's like a decorator for the next statement:
I can think of a bunch of things over the years where I did exactly that on first discovering it.
A small warning though: I write a lot of perl, and while I care a lot about readability for the next developer, plenty of people fail at the "pull it back" phase so it's definitely a trade-off.
Macros are generally useful because they A) make something more succinct, B) improve performance by doing something at compile time instead of runtime, or C) enable metaprogramming. The downside is that they risk the creation of unreadable or overly magic code.
Python already is succinct and already has metaprogramming (although not to the extent of Lisp, as pointed out by the rationale). And we all know Python is slow, but the downsides of making it less readable through macros outweigh the upsides IMHO. Python is loved for being readable. Instead let's improve on its general performance.
I have to write lots of boilerplate code for certain things that in other languages would be much simpler; for example when I'm doing pattern matching in Python I have to use a sequence of if statements and complex expressions. I'd love a pattern matching macro because that would let me express my logic in a manner which more closely matches the semantics.
Macros can be abused just like decorators, classes, or functions to make complex code that's difficult to understand or modify. While macros give you a bit more rope, used properly they can be very handy.
(AST-based) macros can really be an asset in a statically typed language (like Rust or Crystal); they are crap in C since there they're just text substitution. But an interpreted language that already has a lot of metaprogramming capabilities doesn't really need macros. Python has problems, but you don't fix those by putting macros on top of them. They will never be used responsibly.
1: With a better macro system C could have been a much better and more secure language IMHO.
Having said that, I haven't seen metaprogramming being overused or abused much in Python.
(The nearest current Python equivalent is lambdas, but there are very awkward limits on the varieties of control structure they can contain.)
I've been studying Racket lately. Historically I've mainly worked in C and Python, although I've scratched the surface with lisps before. One of the things that strikes me about Racket is that every new special form I learn has its own evaluation rules. They may be very similar, they may follow certain patterns, but they don't attain total consistency. (Especially confusing to me has been the existence of expressions with arity other than one, which are OK in some forms but not in others.) Most of the special forms are macros underneath; you can go and read them, although they're still parenthesis soup to me at the moment. Still, there's a path through the fog -- you know that underneath it all, there is a small number of primitive forms, and all this syntactic abstraction is ultimately built on top of them.
Contrast this with Python. Python has a lot of syntax. You can't reason your way through Python by reference to a small number of primitive forms. Instead, just about every expression evaluates to an object, and if you know what kind of object it produces, you can go look up what that object does. What I really enjoy about learning Racket is seeing how syntactic abstraction lets the language get away with not creating new data structures when it doesn't have to. However, a marriage of the two approaches seems like too much. To understand what a macro does, now you're going to have to map the macro's special syntax back to Python's ordinary syntax, and then you're going to have to figure out how to use whatever kind of objects that syntax produces. This imposes a whole new layer of reasoning on the reader.
I'm not a fan of this PEP and I'd avoid using it in my code if it were adopted. I already have enough trouble explaining generator comprehensions to my colleagues; a half dozen new quasi-Python languages that compose with each other in surprising ways would be more than enough to make them retreat back to C#.
However the proposal has this in the rationale section:
> Python is both expressive and easy to learn; it is widely recognized as the easiest to learn widely-used programming language. However, it is not the most flexible. That title belongs to lisp.
If the main purpose of the proposal is to make Python "flexible" like Lisp why not go the extra mile and add lisp syntax as well (like hy)? That will solve the problem of macros and flexibility.
a = 3
ast = quote do: (if unquote(a) > 3, do: :big)
In Elixir and Julia the AST is represented by a normal type within the language itself just like Lisp, so the macro can just receive that type, manipulate it with the base language tools or using the special syntax above and return it.
So do homoiconic languages; quote/unquote/quasiquote and friends all originate in, or at least are used in, the Lisp family of homoiconic languages. (IIRC, Scheme uses them more than the CL branch of the tree, but both major branches use quote, and the ' abbreviated syntax for it.)
This should be made into an automated response.
And in general, non-Lisp languages with macros, like Elixir, even recommend never using them until you've exhausted all alternatives. Macros always break expectations (if they didn't, you wouldn't use them), compose poorly, are hard to debug without adequate tools, and require a lot more documentation. Great for removing boilerplate though, which is my main usage.
I do think that there should be ways to break hygiene, but hygiene should be the default. I've also found that breaking hygiene is almost always unnecessary when you combine it with a feature like syntax-parameters in Racket.
Maybe it would even make sense to initially limit macros to something like syntax-case which enforces hygiene with later plans to add in more complex macros which can break hygiene.
Finally, for those who think that macros are impossible to debug, I challenge you to check out Racket's macro debugger which allows you to see each macro individually get expanded.
Add this and the next thing you know is every new library has a gratuitous badly designed DSL.
But this is Python, not Ruby.
Also, MacroPy already exists https://macropy3.readthedocs.io/en/latest/. Which is great! But let's not bless this as a standard part of Python.
Pandas, Tensorflow, numpy, etc. are all such complex APIs that they are indistinguishable from DSLs. That they happen to cooperate to a greater or lesser degree with one another and the rest of Python doesn't change that fact. Most Racket DSLs that aren't teaching languages allow for using almost all of Racket's features in addition to some new ones, and today it is generally more popular in the community to create mini-languages, which are basically just a set of macros and functions that work within Racket.
Macros actually make the DSL situation better, because they heavily discourage the nonsense YAML/JSON DSLs that everyone despises. Instead of thinking of the complexity macros will introduce, think of how many YAML DSLs would be killed!
> Pandas, Tensorflow, numpy, etc. are all such complex APIs that they are indistinguishable from DSLs.
The advantage of their current APIs, what distinguishes them from the DSLs we're talking about, is exactly that they're "just Python". You may need to consult the docs or the src to understand what a method does... but at least it's just a method, you know how to call it, how to pass args in etc. It has the same level of abstraction as the rest of your code, it is composable on that level.
IMHO Racket is an example of what to avoid. Sure it has great tools for literally building your own #lang. But even in 'vanilla' Racket code... I did a bit of Racket recently and it was constantly frustrating to find in many libraries things that look like, and could have been, just regular functions instead are syntax macros.
Suddenly you don't know how anything works. Where you think you are passing a string arg to a function... you find you can't substitute a func call returning a string, because it's not actually a "string arg to a function", it's a syntax macro that only recognises a string token in that position. Now you have to dig through the source code to understand what whimsical constructs the library author has inflicted upon you, maybe have to write your own macros to work around it at that level.
It's exactly this kind of gratuitous DSL nonsense I would not want to see in Python. It certainly turned me off Racket (and I wanted to like it - there's a lot to like!). Maybe it's part of the reason why Racket and LISPs generally are still niche languages, while Python is hugely popular.
> the nonsense YAML/JSON DSLs that everyone despises
Hey, at least it's not XML :)
There's also TOML and ProtoBufs...
I think these are all good choices, if all you need are data-structures.
If you need actual programming language features in your config that is when these become unsuitable. So rather than inventing your own half-baked DSL why not use one of the thoughtfully-designed "config languages" out there now like Dhall, Cue, Starlark. You won't have to document it or maintain it, and it probably has other useful features and just works better than what you would have made yourself.
Adding macros seems like a good way to destroy this reputation.
I am not convinced they are needed, and the examples provided leave me quite indifferent. But I haven't been enlightened by LISP yet so perhaps it's just me.
> Many domains see repeated patterns that are difficult or impossible to express as a library. Macros can allow those patterns to be expressed in a more concise and less error prone way.
On the contrary, I believe such generic multi-domain libraries would be a hot mess to maintain. These abstractions would be way too high level, and undoubtedly different fields have significantly, if subtly, different requirements.
Used to be easy to learn but now it's just easy to get started.
What about it has got harder to learn?
To do something technical involves knowing some technical library, but I believe that was always the case.
I think you (or I) understood it backwards. They see macro as a way to avoid multi-domain mess. Each domain could develop its own macro without having to worry about other domains, while in the base language everyone has to be considered.
I have never thought, gee, I wish I had macros. I have thought a hundred times more about multi-line lambdas, and even so the number of situations where they would make my life easier vs being a source of confusion is very, very small.
It's an essentially finished language at this point. The library warts are not worth it given the amount of language machinery you'd have to invent.
for another example, consider:

from collections import namedtuple
Foo = namedtuple('Foo', ['a', 'b'])

tldr: for pickling to work, Foo needs to end up with

__module__ = 'yourmodule'
__qualname__ = 'Foo'
Having said that, is it worth the additional language complexity and opening a Pandora's box of footguns? No. The idioms are not _ideal_, but I would not consider that to be dramatically painful, nor would I even say it slows down development much.
I think it's very difficult to improve idioms at this point without adding complexity in the language to the point where, IMO, it's no longer worth it. I think a strong point of Python vs other languages in the same class is that there's a well-defined limit, both as a community language and language capabilities, as to how much complexity and rope you're willing to give the user.
I'll definitely use this in conjunction with pytest, so that there are test cases and data documenting what is going on.
Less seriously: let the International Obfuscated Python Programming Parades begin!
In Python the complex machinery is in the language and the stdlib. All additional libraries that work with the language generally use stdlib constructs for data interchange, or at least base their work on it. That makes for an extremely pleasant programming experience.
Learning a different macro per dependency is going to lead to catastrophe. I know it's an exaggeration, but I see this in macro-based languages all the time.
Type hints already made the strict-typing sharks crave blood and start trying to turn Python into Java.
I think there are two issues with the implementation of macros, and I am not really qualified, but this is the internet so here goes.
1. Getting access to the quoted parse tree
2. Writing code to generate, modify, etc., that parse tree.
So folks already do (2) by spraying code into a buffer and exec()- or eval()-ing it.
They do some of (1) by reading the textual src, the bytecode or the parse tree. Mentioned elsewhere in the thread are MacroPy and HyLang.
All of that said, I think a language with better semantics for deferred evaluation, or a pre-eval step, might not need full macros. Who knows.
One thing that I found really powerful in Zig and Dlang is the facilities around CTFE (compile-time function evaluation) and how they obviate many of the same problems.
Minor nit: I think the PEP could have done a better job showing not HOW it could be used, but WHAT existing programs could be made simpler or achieve things not possible with the current system.
I do think that some sort of lightweight code generation step could both reduce the lines of code, make things simpler for the outer layers of the stack and make things more performant. Obviously, they can go the other way, but limiting the upside because of possible downsides isn't something we should do with code. Python already has an infinite number of ways to have unmaintainable, inscrutable code, you should see my list comprehensions and my itertools code! :) What is another one if it captures more of the original essence of Python?
> In addition to an interpreter and compiler the system includes an editor with a WordStar-like command set
Wait what? I am getting flash-forwards of a ColorForth like environment for Lisp using DOS graphics. This is like some retro-futuristic timeline where Microsoft went down the Lisp+Xenix path!
Which also mentions Terra, which I think should be required reading in any discussion around macros: https://news.ycombinator.com/item?id=13761571
They didn't. It was 'made' by the company Soft Warehouse and sold through Microsoft.
What is the killer app to justify this?
Could it e.g. give us better matrix notation for numpy and sympy?
> It is possible to demonstrate potential language extensions using macros. For example, macros would have enabled the with statement and yield from expression to have been trialed.
But later, in the example showing an implementation of the with statement:
> The above would require handling open specially.
If the macro-based with statement implementation can only handle the built-in open function and not arbitrary context managers, it's much more limited and not really the same feature. Does this mean that macros as proposed here cannot actually be used to implement something like the with statement?
> Annotations, either decorators or PEP 3107 function annotations, have a runtime cost even if they serve only as markers for checkers or as documentation.
But this runtime cost, if the decorator returns the original function, is essentially the cost of a function call when the module is loaded. If the decorator doesn't put a wrapper around the original function, there is no runtime cost when the function is called, only when it's defined. You would need a huge number of decorated functions for this to have a detectable effect on module loading performance, I'd expect.
> Runtime compilers, such as numba have to reconstitute the Python source, or attempt to analyze the bytecode. It would be simpler and more reliable for them to get the AST directly:
from! my.jit.library import jit
inspect.getsource(func) uses func.__code__.co_filename and func.__code__.co_firstlineno to figure out where the definition of func starts, runs that part of the file through a tokenizer, and uses heuristics to guess where the definition stops.
Sometimes it guesses wrong, particularly if lambdas are involved. Sometimes the code moved around in the meantime. Sometimes the code never even existed on disk, for example because it was entered into a REPL.
That's what they mean by "reconstitute the Python source".
OTOH, I am not keen on this opening the door to even more stuff newbies have to learn, especially in the ML domain, where you just know people will use macros to do all manner of "clever tricks".
This proposal also seems too limited to truly enable the kind of performance speedups numerical Pythonistas really want.
The ! notation feels icky and "un-python" to me. Maybe that's just (lack of) mere exposure effect.
What I want instead: improve/stabilize tools and CPython hooks for decompiling, code generation, AST manipulation, and the like. When I can go source -> AST -> source with confidence, maybe let's revisit macros.
Though the PEP wasn't really clear on this; the grammar rule they give is:
macro_expr = MACRO_NAME "(" testlist ")"
but taking inspiration from the PEP's reimplementations of Python keywords, we could (maybe? kinda?) add macros for expression-oriented versions of statements and just use those instead! Really hope this gets accepted, can't wait to write some truly appalling macros >:D
also hey, at least it looks like it can handle do-notation!
x = do!(
    a := foo(),
    b := bar(),
    pure(a + b)
)

(or perhaps a <= foo() for the bind syntax)
This seems limiting. Just the AST at that point? No more context (potentially alterable) than that?
EDIT: I have a use case in mind: a Haskell-style (kinda) where clause. For such "trailing" syntax it would be necessary to introduce a placeholder higher up the tree.
Interestingly, to a large extent, it is macros that makes that approach possible.
I think it would be cool to have sql baked directly into Python, something like:
t = pd.DataFrame(mydata)
t = select new_col=col1 + col2 from t
I find it very worrisome that the PEP author complains about Python getting useless features, only to then present another arguably useless feature. It gives a very "just one more" feel to the whole thing.
Anyone know where I can read more about this specific change? Seems very interesting for compiler / interpreter engineers.