I was having some difficulty figuring out how Hy is actually translated to Python (and wasn't even sure whether it was compiled or interpreted). Eventually I found the following on Wikipedia:
> Hy is a dialect of the Lisp programming language designed to interact with Python by translating s-expressions into Python's abstract syntax tree (AST).
I kind of wish this was made more clear on the main website. Perhaps, instead of introducing Hy as "a Lisp dialect that's embedded in Python", introduce it as "a Lisp dialect that compiles to Python's AST". The words "embedded in Python" don't make it very clear just how it's embedded into Python. The various ways you can embed a Lisp look very different and have very different tradeoffs.
For example, off the top of my head, I could "embed" a Lisp by writing an interpreter (in C if I care about performance) and letting it be called from Python, perhaps passing in a Python list instead of a string to make it more "native". Or I could "embed" a Lisp by compiling to Python bytecode. Or I could "embed" a Lisp by translating it directly to Python source code. Etc.
> Hy is a Lisp dialect that's embedded in Python. Since Hy transforms its Lisp code into Python abstract syntax tree (AST) objects, you have the whole beautiful world of Python at your fingertips, in Lisp form.
> The various ways you can embed a Lisp look very different and have very different tradeoffs.
Hy itself provides options. Typically the Hy source code becomes Python AST objects, which Python then compiles and executes, but you can also translate the Python AST objects into Python source text. Or you can use Python from Hy or vice versa: https://hylang.org/hy/doc/v1.0.0/interop
The "embed" part stems from the fact that you can mix Python and Hy in a project with bi-directional calling. Works great, because it is all Python byte code in the end.
The original Hy announcement makes it clear that they embed a Lisp by compiling to Python bytecode. You can see it in the following video, around the 16:25 mark.
I would like to make the observation that as Hy matured over the years, instead of accumulating syntactic sugar and special cases to grow more Lispy and less Pythony, it seems to have generally gone the opposite way: becoming a thinner syntactic abstraction over Python's feature set, focusing on the essentials that cannot be emulated in any other way (macros).
A few examples from recent releases:
- "match" is just native Python "match" -- it doesn't even polyfill for pre-3.10 Python versions (in the TypeScript world this would be unthinkable)
- "foo?" used to mangle to "is_foo" as a special case, but this has been removed
- "hy.eval" has been overhauled to be more like Python's "eval"
- nice-to-have but non-essential utilities ("unless") often get pushed out into the Hyrule package
For me this direction was counter-intuitive at first, but it has some very nice outcomes; for one, it flattens the learning curve when coming to Hy from Python, and it makes it easier to interact consistently with Python packages (arguably the main reason to use Python in the first place!)
Or maybe it's just a matter of simplifying maintenance of the language; IIRC, "let" took like 4 attempts to get right :)
In any case, congratulations on this great milestone!
Yeah, at a certain point I realized that both the maintenance and the use of the language became much slicker if unnecessary deviations from Python were minimized. After all, when I'm writing Hy code, I'm usually spending a lot more time referring to the documentation of Python or third-party Python libraries than the documentation of Hy. I felt there were a number of ways Python could be improved upon, but e.g. the old feature that let you spell `True` as `true` in deference to Clojure was just a needless complication.
It is true that Hy really shines in those cases where it adopts an existing Python feature and adds meaningful quality-of-life improvements: anonymous functions without limitations; multiple iteration in for-loops; relaxed character set for identifiers. Things that seem completely obvious, once you have them.
It also demonstrates that elegance in a Lisp-on-Python is reached in a very different way than elegance in a stand-alone language, since it becomes an art of making the best out of what is already there.
At long last! Now I can finally clean up https://github.com/rcarmo/sushy (I've been poking at it over the years, but every time I upgraded Hy, portions of the syntax broke, or things would get moved in and out of the hyrule package, etc.)
By the way, Hy works really well inside https://holzschu.github.io/a-Shell_iOS on the iPad, although the syntax highlighting in vim/neovim needs to catch up to the 0.29+ releases and async.
Although I've tried using Fennel and Guile instead over the years, having access to the Python libraries and ecosystem is preferable to me, and with async I can do some very nice, efficient API wrangling (doing HTTPS with fine-grained control over socket re-use and headers remains a pain in various Schemes, so I very much prefer using aiohttp).
Wow! It has come such a long way since its early, humble beginnings.
I saw the original lightning talk that introduced Hy to the world at PyCon all those years ago. Soon after, I met Paul and started contributing to the early versions of Hy. I was responsible for the CL-style kwargs (you’re welcome), some minor innards, and a library or two.
Whimsy is useful, especially to keep enthusiasm up. It’s nice when hackers can be hackers and not everything is business.
While I haven’t been involved in years it brings a smile to me face to see the project continues apace. What a great milestone!
I played around with Coconut many years ago and my impression was that the compiler was not smart enough to be useful. The generated code had a big pile of helper functions hardcoded at the top, and the program was much slower than the equivalent plain Python.
By contrast Hy generates Python code that is very close to what you might write by hand, apart from some indirection when it comes to scoping with `let` and some variations around returning values.
Maybe Coconut has improved though, it's been a long time.
1. Does it support REPL-driven development? (condition system, breakloop, etc.)
2. Is there a standalone distribution? Distributing Python itself is a hassle; the ideal situation would be to simply distribute a single Hy binary that contains all dependencies within it (either statically linked or as a zip file extracted into a tmp directory).
Except uv doesn’t support conda, so there go many of the niche scientific packages required by users like me. Someone please prove me wrong, because I do love uv when I can use it. I’ve found pixi to be an OK alternative, but not nearly as fast.
1. I don't know what a breakloop is. Hy uses Python's exception system, which is more like a traditional exception system than Common Lisp's condition system.
A breakloop is a REPL operating in the context of condition handling. When a condition is signaled, you can use the breakloop to modify state and direct how the condition should be handled (including fixing something local and letting the current function proceed by ignoring the condition).
Seems like that would only be doable by altering CPython to at least have a hook in the initial exception processing (or maybe there is some magic double-underscore thing for that already?).
I see. That's pretty similar to the feature set of [pdb](https://docs.python.org/3/library/pdb.html). You may then logically ask "Does Hy support pdb?". The answer is "sort of". I've fixed one or two bugs, but we don't test it. I suspect there are various features of pdb that assume Python syntax and would need some hooks to get working properly with Hy.
1. It supports the same set of features that Python supports, which is pretty good when it comes to things like traditional step-through and postmortem debugging. And CPython supports a lot of internal hooks if you want to do really advanced dark magic. But it doesn't have anything like the condition system or handlers/restarts.
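For concreteness, here's roughly what that postmortem flavor looks like in plain Python with the standard pdb module (a sketch, not Hy-specific): inspecting and poking at the failing frame is about as close as you get to a breakloop, minus Common Lisp-style restarts.

```python
# Rough sketch of Python-style postmortem debugging with the standard pdb module.
# After the exception, you get an interactive prompt at the frame that raised,
# where you can inspect locals and run expressions -- but you can't resume the
# failed call the way a Common Lisp restart would let you.
import pdb
import sys

def divide(a, b):
    return a / b

try:
    divide(1, 0)
except ZeroDivisionError:
    pdb.post_mortem(sys.exc_info()[2])   # drop into the debugger at the failing frame
```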
I enjoyed the less serious part a lot. I wish more programming-related projects could embrace the whimsical. That might be the best way to honor the Python tradition, in any case :)
I eliminated a lot of whimsy from Hy and its documentation years ago because it was distracting and created noisy test failures, but I did go too far at some point, and have tried to reintroduce a little whimsy more recently.
Congratulations -- and thank you! I've been playing with Hy on and off (tried to do transformers with it, and then released https://github.com/kunalb/orphism written in Hy). Time to pick it up again and take it for a spin.
> Are there Python language features I can't use in Hy?
At the semantic level, no. I work to cover 100% of Python AST node types with Hy's core macros. It does take me a little bit to implement a new core macro after the CPython guys implement a new feature, but you can always use the `py` or `pys` macros to embed the Python you need, should it come to that.
> Or performance penalties in using Hy?
Compiling Hy (that is, translating it to Python AST) can be slow for large programs (I've seen it top out at about 3 seconds), but at runtime you shouldn't see a difference. Hy always produces bytecode, which can be used to skip the compilation step if the code is unchanged.
You take a little performance hit on initial startup (from a clean filesystem, while __pycache__ folders are created). Other than that, pretty much everything is the same.
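As a small illustration of the caching at play (plain Python here; Hy piggybacks on the same __pycache__ machinery):

```python
# Hedged sketch: byte-compile a throwaway module and show where the cached .pyc
# lands. Imports reuse that cache as long as the source file is unchanged, which
# is the same mechanism that lets Hy skip recompilation.
import pathlib
import py_compile

src = pathlib.Path("demo_mod.py")        # throwaway module, just for this demo
src.write_text("VALUE = 42\n")

cached = py_compile.compile(str(src))    # returns the path of the .pyc it wrote
print(cached)                            # e.g. __pycache__/demo_mod.cpython-312.pyc
```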
I'm now figuring out how to pack images into OpenAI REST calls (using my own REST wrapper), and everything is peachy. Here's my test snippet (mostly to b64encode the file):
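The snippet itself isn't reproduced here, but the base64 step usually boils down to something like the following sketch. The payload shape and field names below follow the common data-URL convention and are assumptions for illustration, not a transcription of the author's code.

```python
# Illustrative stand-in, not the author's snippet: base64-encode a local image
# and embed it as a data URL in a JSON payload for a chat-style REST call.
# The exact payload schema is an assumption; check the API docs you target.
import base64
import json

def image_to_data_url(path, mime="image/png"):
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

payload = {
    "model": "some-model",  # placeholder name
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": image_to_data_url("test.png")}},
        ],
    }],
}
print(json.dumps(payload)[:120], "...")  # a REST wrapper would POST this
```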
If I were to guess, it's to be able to use the all the packages in the Python ecosystem, directly. It's for situations in which Python is already a given. In fact, it's probably the case that many Python programmers can't even use this, due to being in a situation in which even the poor syntax is nonnegotiable.
Lack of self-contained tooling. IDLE doesn't work with Hy. You'll probably need to fiddle with Emacs to set up your environment first, before being able to do anything beyond playing with the language in the REPL.
Congratulations!
I once bought your eBook on Hy, and still today I regularly receive notifications about your book having been updated. Thank you for your steady contributions. I really want to use Hy in one of my production apps one day.
Some concrete advantages that come from a simple, uniform, machine-readable syntax that your text editor itself can understand and manipulate:
- It makes editing and refactoring code faster. With a single keystroke you can do things like popping bits of code in or out of scope, deleting logical blocks of code, etc. It's fast.
- It's hard to explain without trying it, but it is faster and less error-prone to e.g. grab a section of code inside a function and break it out into a separate function. If your Lisp is functional this is even smoother (Hy is not as functional as it could be, last time I checked).
- You never have to think about syntax. Python for example has different syntax for different operations and introduces new syntax relatively frequently. By contrast in a lisp the syntax for setting a variable looks the same as the syntax for looping and for everything else. It's all just function calls.
- If you have an nREPL set up (it's like the Python REPL, but it's an API your editor can talk to), it makes it easier to run segments of code that are embedded inside other bits of code. E.g. you might have some complicated piece of maths or string manipulation in a function. You can run and try it out in isolation without executing the entire function.
- Metaprogramming. This is a bit overhyped for most programmers, but having the code as a data structure means you can add new language features from your own code, build DSLs, and have code that modifies other code more easily than in other languages. I try not to use metaprogramming and macros much, but I use a lot of things that smarter people than me have made with them.
These features are a bit hard to appreciate without trying them. Highly recommended!
> By contrast in a lisp the syntax for setting a variable looks the same as the syntax for looping and for everything else. It's all just function calls.
Not really.
Setting a variable in Lisp is not a function call. IF is also not a function call. Defining a function is also not a function call. Loop operations like DO, DOLIST, DOTIMES, ... are also not function calls. Lots of things are not function calls. Macro forms are also not function calls.
(let ((a 10) (b 20) c)
  (declare (type (integer 0 *) a b c))
  (setf c (* a b))
  c)
Above is a LET expression, a variant of a lambda application.
It does not look like a function call. It looks like an operator list form, with LET as the operator. The next element is not a function call or similar, but a binding list with three variable definitions, two of them having an init value. Next to the binding list is a declaration form, with a type declaration for the local variables. Then comes a sequence of forms, a body, which is evaluated top down, and the last value is returned. There is a setf form, for setting a variable. The variable in the setf form is not evaluated; it will be set to the value of the second argument.
Neither LET, DECLARE, TYPE, INTEGER, or SETF are functions. They have different syntax and/or semantics from function calls.
Thus we have:
* special control flow
* a LET syntax which is not looking like a function call
* a lexical scope created by LET
* a type declaration with special syntax
* special evaluation rules, unlike evaluation of a function form
A Lisp user will need to learn that IF, WHEN, AND, ... and a lot of other operators are not functions....
I don't think (fn x y z) is all that different from fn(x, y, z). The lack of finicky operator precedence or other syntax footguns is nice. You're basically looking at the AST as you work. You're one fewer layer of abstraction removed from the logic you are composing.
In real world Lisp, alignment conventions are used that make even a fairly nested function readable at a glance. You'd also generally work using something like paredit, so you're kind of shuffling the S-expressions around like legos. It's not a language that you'd want to write in something like Notepad.
The most important thing about the syntax, though, is that since it's basically the AST, a Lisp macro can effectively manipulate the AST directly and on the fly. This is incredibly powerful, and would be hard to achieve in an Algolian language like Python.
Because we're so used to thinking parentheses provide an order of evaluation precedence, the fact that 'fn' is within the parentheses is very confusing. Even (fn(x y z)) would have been better. Having the function name and its arguments just next to each other with no syntactic separation is hard to follow. It's like doing arithmetic this way: "add x y z". Is it x+y=z or x=y+z? I'm sure I can get over this hurdle though.
I hear you. Try to shift your thinking from statements to expressions; it will make things easier.
The placement of parentheses is not arbitrary. The parentheses enclose expressions, and the nesting of expressions builds a tree. This tree, in effect, is your program's AST. Your C program compiles to something very similar; Lisp just makes it explicit.
"Add x y z" is exactly what it sounds like, if I said those words to you. The add function is usually +, so (+ x y z) is an expression that adds x, y and z, and the whole expression evaluates to the result. You can nest expressions however you like, so, for example, (+ x (* a b)) adds x to the result of the expression (* a b), which evaluates to the result of multiplying a by b.
Values and expressions are effectively interchangeable. Any one of +, x, or y, could be replaced by an arbitrarily complex expression (yes, the function too - you could replace it with a call to a higher order function, which would return a function that is then called over x and y).
The neat thing is, that's basically[0] all the syntax Lisp has. There are no reserved keywords. Everything works as per above, and can be redefined at will. You can change the language's if and else to work any way you want. You can redefine Lisp's reader to change the syntax of the language to absolutely anything you can imagine, and doing so is quite easy, since you're effectively already working at the AST level.
And you can do all of this in a running REPL without even needing to restart it.
If you're interested, check out Common Lisp: A Gentle Introduction to Symbolic Computation by David S. Touretzky.
[0] Macros are the other piece of the puzzle, but to understand them you would need to read up on Lisp more. Once you understand Lisp macros, the advantages of Lisp's syntax become obvious.
Wow, thanks very much for the detailed reply. I'm a bit excited and interested to read up on it more. I can only imagine, it must have inspired so many enthusiast level and esoteric languages.
Code that's written in Lisp uses the AST differently. It makes the process of generating machine code much easier. This in turn enables macros, which is metaprogramming not available in non-Lisp languages. On the other hand, I tried this avenue, and since most modern computing is not Lisp-based, it severely limits its potential. I'm hoping for a Rust-based Clojure or variant. Clojure has the problem that it's based on the Java ecosystem, which has severe downsides. A Lisp that's based on Python doesn't make much sense to me personally; Python isn't a good language to write other languages in. I think Zig and Rust would be the interesting choices. One attempt: https://github.com/clojure-rs/ClojureRS
Wouldn't it make more sense, then, to compile existing languages to a Lisp? From what you said, it sounds like the goal of Lisp is making generation of machine code faster/easier? Or is it that forcing programmers to encode their intent into a Lisp removes guessing and optimization overhead for the compiler?
You can invent another syntax with Lisp/Scheme macros if you want. When compiled or interpreted it will be macro-expanded, and then likely transpiled to an AST and then compiled into byte- or machine code.
Take a look at Racket languages for some examples.
Lisp syntax, with the parens and so on, means editing is inherently structural, which makes it relatively easy to reason about and restructure the code. In Python, whitespace has a double meaning: it separates tokens and also delimits blocks, similar to e.g. {} or () in other languages. That makes structural editing relatively hard.
As I understand it, that's pretty much exactly how WASM works. It can output either a `.wasm` binary or the same code in a `.wat` text format, which is written as Lisp-like s-expressions.
Yes, you can think of Lisp almost as an intermediate language. Lisp probably lends itself well to machine code generation, but I haven't done enough assembly to really know that. It's not designed for that; it's just a side effect of the language primitives being very, very short. You can write a basic Lisp interpreter in a few hours yourself: https://norvig.com/lispy.html. Creating a decent compiled language takes a lot longer than that. Lisp only requires five or so primitives, and it doesn't have a grammar.
It is a bit awkward for humans, but machines can process it better because it has less structure. For example, I thought Lisp could potentially be a great choice for interop with large language models, because the code is potentially shorter. Good Clojure code can be 5-10x shorter than Python code. With LLMs, the size of the code matters a lot.
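To make the "interpreter in a few hours" point above concrete, here's a heavily compressed sketch in the spirit of the linked lispy.html. It's a toy under obvious assumptions: a handful of forms, integers only, no error handling, strings, or macros.

```python
# A compressed Lisp reader/evaluator in the spirit of norvig.com/lispy.html.
import math
import operator as op

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(read(tokens))
        tokens.pop(0)  # drop ")"
        return lst
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

GLOBAL_ENV = {"+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv,
              "<": op.lt, "pi": math.pi, "print": print}

def evaluate(x, env=GLOBAL_ENV):
    if isinstance(x, str):              # symbol -> variable lookup
        return env[x]
    if not isinstance(x, list):         # number -> itself
        return x
    head, *args = x
    if head == "if":                    # (if test then else)
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if head == "define":                # (define name value)
        env[args[0]] = evaluate(args[1], env)
        return None
    fn = evaluate(head, env)            # ordinary call: evaluate operator and operands
    return fn(*[evaluate(a, env) for a in args])

print(evaluate(read(tokenize("(+ 1 (* 2 3))"))))  # 7
```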
Compared to other languages, 'lisp syntax' is very minimal. It is just a prefix notation with parenthesis for enclosing expressions, the first item usually being a function. There are only a handful of special forms to learn, which deviate from this.
The real power of lisp IMHO lies in:
- REPL-driven, dynamic development. This is hard to explain. It's like chocolate. You have to try it. You either love it or hate it.
- Macros. This is, again, enabled by the Lisp syntax -- or rather, the lack of it.
Here is an example I recently ran into when checking out Hy
By analogy, programmers like LISP over other syntax for the same reason that creative children like LEGO over other toys. It's not that the pieces in the box are more beautiful than any other individual example of molded plastic, but because they are purpose-built to be the maximizing mold such that a box full of them gives more flexibility and potential than a box of any other shape you might choose. Lisp syntax is the way it is to create a human-machine interface with as much similarity between the two sides as possible, so the human approaches machine power when you write code, and the machine approaches human reasonability when you inspect running code.
For examples, McCarthy's original purpose was to demonstrate the effectiveness of a symbolic differentiation process he had dreamt up, so he devised the syntax and meta-circular evaluator of lisp to make it maximally obvious from the program text that the differentiation system was mathematically correct, while keeping it maximally obvious from the program model definition that it was computationally concrete. In response to new trends in the programming field, Lispers write mind-bending books like "Let over Lambda", "The Art of the Metaobject Protocol", or "Software Design for Flexibility" to show that, when your syntax and model is right, you can radically change how you solve problems not by rewriting your spec or switching languages but by just adding more lisp to the lisp you already have, which has the same simplicity as radically increasing the sculptures a child can make by just adding more lego to the lego they already have.
Lisps, on the other hand, tend to add features as just more convenient versions of things they can already do: Macrology for self-adapting code? Just lisp functions on lisp data structures corresponding to lisp functions. Actors for a concurrent execution model? Lisp functions as lisp data parameterized by higher-order lisp functions. Composable continuations for error handling? A lisp function exploring a lisp data structure of lisp data structures of lisp functions. It's turtles all the way down. Paul Graham points out that you can understand the social hype around the presence or absence of a feature like operator overloading as a consequence of friction-ful syntaxes, while lispers care much less, because replacing a function you don't prefer with one you do for your use case is straightforward in a friction-free syntax. When he decided to build a reddit clone for tech entrepreneurs, he didn't need an outside data system just to get started; he only had to spin up a pool of threads for sessions to directly modify s-expression literals in memory, which he could save or modify by printing straight to disk and load by just reading the lisp syntax back into memory, the way all lisp code is read, with no execution intermediary of the kind that languages like the Pythons tend to have, which complicates things enough to make comparatively big services (like a whole database for a private gossip forum) worth the effort. The syntax doesn't make lisp first-order beautiful; it makes lisp the hacker's local maximum, which is second-order beautiful, and honestly isn't much harder to get into the habit of reading once you know it's worth it.
Exactly. When certain smug people come about I just humor them. Like, "oh isn't that nice", when I'm really holding my nose internally. Like who dumped a bunch of toenail clippings in your code? When I see Lisp my reaction is like when my dog makes a mess on my carpet. And macros? You get paid to write code. Is it too much to write a few more lines? Python's nice and all, but Algol, that's a rugged person's language, feels very solid. Not like this squishy Lisp. Like how many parens do I have to type?? Please.
Years ago, under the influence of Lisp romanticism late into my university years, I worked on a domain-specific language for designing and analyzing control systems as my senior design project, using Hy! Just checked, it's been five and a half years to be specific. Really, time flies.
Hy-pothetically, yes, you could take Hy code in and spit Python code out via `hy2py`. I think at one point I considered supporting this officially, but then decided there was really no advantage.
That's how I'm using Hy at my job—I write Hy then hy2py it into Python, lightly polish the compiled Python for human consumption, and then share that with my Python-fluent but Lisp-illiterate coworkers.
Yay! The birth of a language is a beautiful thing.
I’m curious about the macros: how are these implemented? They seem like pretty straightforward unhygienic Lisp macros, which is a little bit of a disappointment, but better some macros than none at all! Anything about the macro system that distinguishes it from the Common Lisp system? E.g. anything borrowed from Scheme or Racket? Docs are sparse here.
It’s far from new. In 2012 I worked for a shop that used an internal package named “hy”, and the introduction of this Hy made our builds break in a novel and interesting way.
(Also, use something to ensure your own internal packages have a higher priority, alright? That’s a lesson I didn’t need to learn twice.)
Yes, and it's a very nice tutorial! I'm interested in implementation details. Maybe there's no hygiene (and no scope sets etc.) to worry about—that would probably make documentation a little shorter. I'm sure the documentation will grow as people run into edge cases.
(I'm also probably a little spoiled with documentation coming from Racket which has like 4 big chapters dedicated to different aspects of macros scattered around the docs, plus some associated papers. Forgive me—I'm not trying to dunk on Hy; I just like reading docs.)
Admittedly, I've tried not to document the implementation. Yeah, they're pretty much simple dirty Common Lisp macros. Internally, they're functions that are called with the arguments converted to models (via `hy.as-model`), and then the return value is converted to a model. If a macro's first parameter is named `_hy_compiler`, it gets access to the current compiler object; this is undocumented since it's only meant for internal use. Reader macros have no parameters, but can access the current reader object as `&reader`. When it's defined, a reader macro is added to the current reader's dispatch table.
I have a hard time trusting the CL macros I write because of unexpected interactions with the context that I use them in. While it is the case that CL macros are more powerful than the R6RS macros-by-example system, Racket’s system (and some other newer languages that have adopted things pioneered by Scheme and Racket, such as Elixir) give you hygienic macros without sacrificing expressive power.
I want my macros to be easy to write correctly. That can only happen when the system has proper hygiene.
Table saws were only improved by the addition of an emergency stop to prevent people from maiming themselves. Power tools don’t have to be dangerous.
If you are wanting to introduce variable capture you better be really explicit about when you want it.
If there’s no hygiene, I have to know everything about how the macro is implemented in order to trust it and use it confidently. Might be fine for small shorthand, but that won’t scale. You need non-leaky abstractions to build on them.
Racket’s `syntax-parse` and “syntax parameters” show that you can have it both ways: procedural macros that are hygienic by default, but with an explicit escape hatch when you do want to introduce new bindings into the macro call site. It also gives you much much better errors.
CL macros are about as dangerous as malloc/free, but without years of experience and tools like Valgrind to debug. They’re hard to trust and get right.
Racket macros are like GC/affine typing: everything is correct by construction.
I've used table saws, with and without protection. They're all dangerous as fck, because removing the chance of getting hurt means converting it to a completely different kind of tool. Chain saws, same thing. Power tools.
Of course we want them to be as safe as possible, but that's a different discussion. All attempts I've seen so far have dropped functionality to get there.
> All attempts I've seen so far have dropped functionality to get there.
Well, then I recommend you take a look at Racket's macro system: Racket gives you hygienic macros without any loss of power. (It's actually more powerful and expressive than CL macros.)
The way you phrased that suggests you're only familiar with CL-style macros, where arguments to macros are nested lists of symbols, and a function or variable is known by its name (a symbol) and nothing more.
Racket's model is much more sophisticated and powerful. The input to a macro in Racket is a syntax object [1], which combines the CL-like quoted expression with additional source and lexical binding information. This means that in Racket, unlike CL, a variable is not just its name—it's also all this other information. Racket uses scope sets to track binding information in a sane and hygienic manner across different macros and functions.
So, if you want to introduce an identifier that the macro caller can interact with (note: I said an identifier—you can introduce any symbols you want but they'll be different identifiers because their scope sets will be different) you need to explicitly state that you would like to create an identifier with a particular scope set. [4]
But that's the old, dumpy, clunky way of doing things. Thanks to recent research, we have much better ways of introducing identifiers in a sane, hygienic way. Greg Hendershott's excellent "Fear of Macros" walks through making the `aif` macro using syntax parameters [3], which let you cleanly introduce new bindings. (See the paper "Keeping it Clean with Syntax Parameters" he's linked to in his post.)
So, in short, no, we're not back to regular CL macros, because Racket prevents accidental variable capture but gives us an easy way to handle 99.999% of the use cases for breaking hygiene (syntax parameters), and then one more way (`datum->syntax`) just in case we really need to do something out of the ordinary. Either way, Racket lets you express your intent with macros better and more precisely than CL.
Well, this is a little embarrassing: Clojure was one of the biggest influences on Hy in its youth, but that was mostly before I got involved in 2016. I never actually learned Clojure. So hopefully somebody who knows both Hy and Clojure well can answer. I can tell you that at run-time, Hy is essentially Python code, so Hy is more tightly coupled to Python than Clojure is to Java; a better analogy is CoffeeScript's relationship with JavaScript.
I get the impression that Clojure tries to convince the programmer to avoid side-effects a lot more strenuously than Hy does, but it's still not a purely functional language, so I don't know how consequential that is in practice.
Clojure has a good collection library with immutable/persistent data structures, but as a language it allows side effects and has some mechanisms to manage them. It is also possible to call any Java method from Clojure.
Clojure does not work with Java ASTs, it translates into JVM bytecode directly.
I haven't used Hy, but I am the maintainer of Basilisp, which also compiles to Python and aims for reasonably close compatibility with Clojure, if you're interested.
Wondering how custom immutable data structures fit in with the Python ecosystem.
Particularly, I know that NumPy arrays and Pandas Series/DataFrames are the popular data structures used in research computing in Python (for Statistics, Data Science, Machine Learning etc.). These data structures afaik are mutable, however (for performance reasons), so at least the aspect of immutability from Clojure cannot be easily integrated with the Python ecosystem.
This project is much younger and used by many fewer people than Hy, so I couldn't really speak to this besides my own opinions. The few who have started using it and contributing seem to just be using it as a way to write Clojure while interacting with popular Python libraries and tools. Kind of the same way that interacting with the Java ecosystem is often more pleasant from Clojure (IMO) than in Java itself.
I've tried to facilitate strong Python interoperability despite the variety of otherwise incompatible features of each language. It's trivial to work with immutable data structures using Clojure idioms and then convert them to Python data structures (as needed) at the boundaries, but the immutable data structures used by Basilisp are also generally compatible with Python's core (read-only) interfaces so that conversion may also not be necessary if you aren't expecting the called function to perform any mutations.
This reminds me of Berkeley's CS61A when it was taught with Scheme. One of the projects was writing a Scheme interpreter in Scheme. It felt silly, but it was a great small project to showcase recursion, trees, and blurring the distinction between data and code.
Yes, such as: metaprogramming via macros and reader macros; arbitrary compile-time computation; removal of restrictions on mixing statements and expressions; and other arities for Python's binary operators. See http://hylang.org/hy/doc/v1.0.0/whyhy#hy-versus-python
Dynamically shadowing global variables is not built-in, but easy to write a macro for if you want it. See e.g. https://stackoverflow.com/a/71618732
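As a rough illustration of what such a construct amounts to at runtime, here is a plain-Python analogue (not the macro from the linked answer): temporarily rebind a global name and restore it on exit.

```python
# Plain-Python analogue of dynamically shadowing a global: swap the binding in,
# run the body, and restore the old binding (or remove it) afterwards.
from contextlib import contextmanager

@contextmanager
def shadowing(namespace, name, value):
    sentinel = object()
    old = namespace.get(name, sentinel)   # remember the previous binding, if any
    namespace[name] = value
    try:
        yield
    finally:
        if old is sentinel:
            del namespace[name]
        else:
            namespace[name] = old

x = 1
with shadowing(globals(), "x", 99):
    print(x)   # 99
print(x)       # 1
```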
Super happy Hy 1.0 has been released! It was the first proper open-source project I contributed towards and I don't think I would have been as engaged as I am in the community without it.
I looked at the examples page, but it was a little disappointing. Every example was something that was easier (and sometimes shorter) in Python.
It would be awesome if there were an example of something that can't be done in Python because it takes advantage of lisp's "functions are first class".
You can add all the same type annotations as in Python, but from what I've seen, type-checkers expect Python source text and don't just use standard Python introspection, so you'll need to use `hy2py` first to actually check your program's types.
I'm not sure. I was going to say that Mojo is proprietary software and so I've never tried it, but I just checked and apparently it's free now. If nothing else, you can probably get a lot of Hy code to run on Mojo via `hy2py`, if Mojo supports a lot of Python as it claims to.
Edit: actually, confusingly, the GitHub repository for Mojo doesn't have an interpreter. The language is still proprietary.
Also, looking at the code on GitHub suggests this compiler is written in Python (see https://github.com/hylang/hy/blob/master/hy/compiler.py).
Regardless, interesting project!