
Models of Generics and Metaprogramming: Go, Rust, Swift, D and More - trishume
http://thume.ca/2019/07/14/a-tour-of-metaprogramming-models-for-generics/
======
posnet
Another interesting approach is Spiral Lang, which allows arbitrary staging,
sort of an extension of Zig/Terra as mentioned in the blog post.

[https://github.com/mrakgr/The-Spiral-Language](https://github.com/mrakgr/The-Spiral-Language)

~~~
enzv
His commit messages are interesting in themselves

~~~
Ygg2
It's reminiscent of a man's spiral into madness.

    
    
        1:04am I have coded for thirty hours straight.
        I'll finish JIT soon.
    
        6:05am JITs almost done, a strange door has appeared
        near my desk. I can hear faint meowing.
    
        27:67am THE DOG IN YELLOW
        HAS SANG ITS MEOWING.
        OBLIVION IS THE KEY.

------
Aardappel
Looks like [http://strlen.com/lobster/](http://strlen.com/lobster/) is in the
C++/D bucket, with unconstrained template parameters.

The article makes it sound like "errors occurring in the library code" is a
major problem, but forgets to mention how much that approach also contributes
to its strength and simplicity. In Lobster I took great care to make these
type errors very readable, like a compile-time stack trace.

Also, monomorphization does not always produce code bloat. All these copied
(typically small) functions tend to get inlined, and inlining has a habit of
cascading, allowing parent code to be simplified and reduced in size. Contrast
that with boxed generics that use virtual calls, which act as a barrier to
optimization: the compiler has no idea what a virtual call will do, or which
of the many possible methods it will dispatch to. That can leave a lot of code
in the compiled binary that is never actually needed (a bigger problem for AOT
languages like Go than for JIT languages like Java, since "dead" JVM bytecodes
don't cause code cache misses).

In a sense, monomorphization is a good match for AOT compilation and
expressive type systems, while boxing works better for more dynamic
implementations and simple or absent type systems. Languages like Go straddle
these two extremes uncomfortably.
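The contrast above can be sketched in Go, which now offers both styles side by side. This is only an illustrative sketch: the interface version forces a dynamic dispatch the compiler cannot see through, while the generic version gets a per-type specialization whose comparison can be inlined. (Go's real generics implementation uses GC-shape stenciling with dictionaries, a hybrid, not full monomorphization.)

```go
package main

import "fmt"

// Boxed style: the compiler sees only the interface, so the call to
// Less is a dynamic dispatch that blocks inlining across the boundary.
type Lesser interface{ Less(other int) bool }

type MyInt int

func (m MyInt) Less(other int) bool { return int(m) < other }

func minBoxed(a Lesser, b int) bool { return a.Less(b) }

// Monomorphized style: min[int] and min[float64] are (conceptually)
// separate specializations whose comparisons can be inlined and folded.
func min[T int | float64](a, b T) T {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(minBoxed(MyInt(3), 5)) // true, via virtual call
	fmt.Println(min(3, 5))             // 3, via specialized code
	fmt.Println(min(2.5, 1.5))         // 1.5
}
```

The boxed version also forces `MyInt` behind an interface value, which is exactly the kind of optimization barrier described above.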

------
jkroso
Is Julia an example of the idea in the last paragraph?

~~~
StefanKarpinski
Not if I'm understanding it correctly. The point where Julia IR specializes on
types is between `@code_lowered` and `@code_typed`: lowering does not depend
on type information, but type inference + inlining do. What the author is
proposing would seem to be more like generating LLVM code or even machine code
with holes in it, which get filled in with type-specific information at the
very end. I think that would be quite hard to make work without generating
terribly generic, slow code in the first place, especially in a language with
dynamic lookup as expensive as Julia's (which we almost never end up doing,
thanks to aggressive monomorphization).

