After years of dealing with points of frustration in C++ land, I've created my own programming language. It emphasizes compile-time code generation, seamless C/C++ interoperability, and easier 3rd-party dependency integration. It's like "C in S-expressions", but offers much more than just that.
I had a hard time trimming this article down because of how excited I am about the language. Feel free to skim and read more in the sections that pique your interest.
I don't expect everyone to love it and adopt it. I do hope that some of the ideas are interesting to fellow programmers. I found it eye-opening to realize how much better my development environment could become once I opened this door.
I made the same thing a while back, and one of the neat, simple things you can do is implement function overloading a la C++. All you need is a way to serialize types to strings that are valid identifiers. Then you (1) at the definition site, append the string forms of each parameter's type to the function's name, and also define a normal function under the original name that does the dispatching, and (2) do the same mangling for the argument types at each call site. Et voila! Function overloading! Not quite as powerful as C++, which takes conversions and such into account, but it's an interesting experiment nonetheless. You can see how I did it here: https://github.com/zc1036/ivy/blob/master/src/lib/std/overlo... (DEFUN2 is the version of DEFUN in my language that supports overloading.)
I used something similar to your technique for compile-time variable destruction. The compiler doesn't know the type, so a macro generates a callback which casts the pointer back to its concrete type and deletes it. These callbacks are named after the type, so they can be lazily added and reused.
How did you implement macro expansion? Do you translate the macros to C/C++, compile that with a C/C++ compiler, and execute the temporary binary, or do you have an interpreter for that?
I work on a somewhat similar project called Liz, which is basically a lisp-flavored dialect of Zig. I haven't implemented user-defined macros yet; I'm planning to learn more about comptime and its limitations first. But the compiler itself uses macro expansion to implement many features.
This is a serious exaggeration.
Common Lisp has extremely good compilers that can match C performance.
There are plenty of Scheme implementations (I use Chez) with very good performance characteristics too.
I think Lisps tend to optimize for throughput, but games have very strict latency requirements. Garbage collection pauses could cause frame pacing issues (not that C solves that completely, but it is at least not a built-in disadvantage of idiomatic use of the language).
How much did the development of Azul's C4 or Red Hat's Shenandoah cost in terms of money, talent, and time, and how much has this work lowered the barrier for less well-resourced languages? I don't get the impression that the answer is: "enough that we will soon see low-latency, high-throughput garbage collectors becoming the norm".
In Erlang's case, the key is that garbage collection is per process. (Note that Erlang processes are analogous to Golang's green threads, not OS processes.)
(btw, I'm enjoying the Reactor podcast, keep it up)
Thank you! Since we don't get many comments on reactor.am, it's hard to tell whether sharing our mastermind sessions is useful for others.
Also, despite the fact that scenarios that can leverage "JIT for free" are both a best case for Lisps and not that rare across fields (from firewall rules to stencil computations), to the best of my knowledge Lisp plays absolutely no role in practice even in this niche. To be clear, I don't think this is due to any inherent shortcoming of Lisp itself; I suspect it's mostly due to brain drain.
¹ Frequently on par or even better than LuaJIT, though it can take some work to get it there.
Provided memory is no object.
In general it takes five times as much memory for a GC'd program to be as performant as one with explicit memory management. See: https://www.cics.umass.edu/~emery/pubs/gcvsmalloc.pdf
There's a reason why the most interesting work these days is being done in and on languages like Rust, which has no GC but still saves you 90% of the work and close to 100% of the pain of bugs that are inherent to explicit memory management.
That GC in particular has to stop the world, traverse the entire fresh heap, and guess whether something is even a pointer (and it can guess wrong, causing a memory leak!), and it does all this at arbitrary times, unless you do non-idiomatic things like managing your own memory in preallocated arrays or using C/asm-powered memory-mapped buffers.
Nothing comes for free, and some domains can't pay the price for GC (and choose to spend those resources elsewhere). Games, in my opinion, are largely still in that category.
I wonder what exactly makes code "idiomatic" in a GC language. One of the greatest misconceptions about GC languages is that you shouldn't worry about allocations at all. For me, GC mostly means the correctness guarantees and the convenience that come with it: I don't have to track every single allocation and manage it manually. However, if I'm looking for performance, I should still be aware of all the larger allocations and avoid them, as in any other language. Not reusing allocated memory where reasonable is detrimental to performance in any language.
And of course, the choice of the right GC approach can also help with applications like games. LispWorks even provided a Lisp environment for a NASA probe; the GC had real-time capabilities.
Just because a language has a GC doesn't mean one needs to use it 100% of the time, which is exactly what is behind the .NET 5 performance improvements that top C++.
By making use of stack allocations, value types, memory slices, native heap allocations.
Features that Common Lisp also supports, and I bet Allegro and LispWorks are better than SBCL in the performance chapter.
ACL's GC is still the best hands down (and of course ABCL by virtue of the JVM). I think SBCL is still sporting a conservative GC on x86.
To my knowledge, SBCL switched to a precise GC some time ago. But I still would expect ACL's GC to outperform the SBCL one.
In this blanket form, the statement is just wrong. Yes, with GC you need a larger heap, since unreferenced objects remain on the heap until collected, and you want enough heap space that collections are infrequent and each one reclaims a lot of objects (especially with generational GC, where you want low survivor rates in the youngest generation).
However, how much space you want to reserve depends on many factors. Usually the extra space is proportional to the allocation and deallocation rates, not the total heap size. If you have lots of long-lived data on the heap, that doesn't count toward the extra space. This comes down to the allocation behavior of your program in general: if you want the best performance, your program shouldn't blindly create garbage, but only where it is needed. A lot of data can be stack-allocated, so it never touches the GC. And of course, you can keep some memory manually managed (depending on the language) for bulk data, whether allocated entirely outside the GCed heap or by keeping references alive to memory that gets reused manually. None of this really counts toward the extra-space calculation.
The programming language used plays a huge role here, and the paper you quoted uses Java, which is quite an allocation-happy language, so the heap pressure is higher and you need more extra space to stay performant.
I will keep saying this over and over again. It's only cheap if you don't waste it. Once you are willing to waste it no amount of RAM will be good enough.
https://github.com/eudoxia0/cmacro (Written in Common Lisp, doesn't use S-exp syntax)
https://github.com/tomhrr/dale (Prev disc: https://news.ycombinator.com/item?id=14079573)
https://github.com/saman-pasha/lcc (No mention of meta-programming)
https://github.com/deplinenoise/c-amplify (No docs, no update since 2010)
Carp is still actively maintained, should be on the other list.
Just want to note that there is a large benefit to this kind of safety even if you're not writing safety-critical code: lack of bugs! The biggest benefit I've seen from Rust is that entire classes of bugs, some of which can be extremely difficult to root-cause and fix, are removed by design. So you spend significantly less time in the latter half of the project tracking down bugs, which is more than enough to offset the productivity loss at the beginning.
So while safety and maintainability are what Rust gets marketed for, the ergonomics and overall productivity of the language are enough to sell me on it for game dev. Languages like Zig and Jai also seem interesting in this space, but they're far from ready to do anything in production with. The Rust ecosystem is ready for production now, and the language is a pleasure to work with.
It seems more intended for scripting glue code in a Rust project.
You're correct that I've chosen safety and convenience over performance. I might eventually consider integrating the Cranelift code generator to try to achieve LuaJIT-like performance, but it's definitely not on the short-term roadmap.
In practice, I find that GameLisp is more than fast enough to script a busy 2D platformer, as long as you're sensible about using Rust for the more CPU-intensive parts of your game engine.
Incidentally, I'm planning to release GameLisp 0.2 within the next few days - happy to field any other questions while I'm here!
I believe some of the downsides mentioned could easily be taken care of by a macro. The path traversal, for instance, could most likely be flattened with one, and the same goes for the type definitions:
const char* myString = "Blah";
(var my-string (* (const char)) "Blah")
(var my-string conststring "Blah")
Edit: The GitHub repository implies that this could be used for purposes other than games:
> The goal is a metaprogrammable, hot-reloadable, non-garbage-collected language ideal for high performance, iteratively-developed programs (especially games).
Once I get pure C output supported, it should be suitable even for embedded C environments.
Why did the creator create this? Wouldn't Carp have served the purpose?
When I say seamless, I'm going for as close as possible, i.e. it should feel even easier to use C from Cakelisp than from C itself. The build system especially makes this possible.
That doesn't mean it doesn't perform well; I'm currently writing a GBA game with it: https://github.com/TimDeve/radicorn
Cakelisp differs from Zig in that arbitrary code modification is supported, which is a step closer to the modifiable environment of Lisps. Very few languages besides Lisp allow you to do things like "iterate over every function defined, change their bodies, and create a new function which calls them". Zig does not support that. It's extremely useful for some tasks (e.g. the hot-reloading code modification mentioned in the article).
Jai isn't out yet, but I was inspired by many of the comptime ideas.
Lots of things don't work in D's CTFE.
It's certainly very useful, and much better than e.g. C++ constexpr. But it's still worlds away from proper Lisp metaprogramming.