Models of Generics and Metaprogramming: Go, Rust, Swift, D and More (thume.ca)
81 points by trishume 3 months ago | 6 comments



Another interesting approach is the Spiral language, which allows arbitrary staging; it's sort of an extension of the Zig/Terra approach mentioned in the blog post.

https://github.com/mrakgr/The-Spiral-Language


His commit messages are interesting in their own right.


It's reminiscent of a man's spiral into madness.

    1:04am I have coded for thirty hours straight.
    I'll finish JIT soon.

    6:05am JITs almost done, a strange door has appeared
    near my desk. I can hear faint meowing.

    27:67am THE DOG IN YELLOW
    HAS SANG ITS MEOWING.
    OBLIVION IS THE KEY.


Looks like http://strlen.com/lobster/ is in the C++/D bucket, with unconstrained template parameters.

The article makes it sound like "errors occurring in the library code" is a major problem, but it doesn't mention how much that also contributes to the approach's strength and simplicity. In Lobster I took great care to make these type errors very readable, like a compile-time stack trace.
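
For illustration, here's a minimal C++ sketch of the unconstrained-generics model (illustrative code, not Lobster): the library template places no constraints on T, so a bad call only fails when the body is instantiated, and the compiler reports it as a trace from the call site down into the library code.

    // "Library" code: T is completely unconstrained, like a C++/D template.
    #include <vector>

    template <typename T>
    T sum_all(const std::vector<T>& xs) {
        T total{};
        for (const auto& x : xs) {
            total = total + x;   // a type error surfaces HERE, inside the library
        }
        return total;
    }

    struct Point { double x, y; };   // no operator+ defined

    int main() {
        std::vector<int> ints{1, 2, 3};
        sum_all(ints);    // fine: int supports +

        std::vector<Point> pts{{1, 2}, {3, 4}};
        // Uncommenting the next line fails to compile: the error points at
        // `total = total + x` with a "required from here" trace back to this call.
        // sum_all(pts);
        (void)pts;
        return 0;
    }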

Also, monomorphization does not always produce code bloat. All these copied (typically small) functions tend to get inlined, and inlining has a habit of cascading, allowing parent code to be simplified and reduced in size as well. Contrast that with boxed generics that use virtual calls, which act as a barrier to optimization since the compiler has no idea what a given call will do or which of the many methods it will dispatch to. That can leave a lot of code in the compiled binary that is never actually needed (a bigger problem for AOT languages like Go than for JIT languages like Java, since "dead" JVM bytecodes don't produce code cache misses).
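
To make that concrete, here's a minimal C++ sketch (names are illustrative, not from the article): the templated version gets stamped out per type and the tiny copies inline away into the loop, while the virtual-call version keeps an opaque indirect call the optimizer can't see through.

    #include <cstddef>

    // Monomorphized: the compiler emits twice<int>, twice<double>, ... and each
    // small copy is trivially inlined at its call site, letting the surrounding
    // loop be simplified further (unrolled, vectorized, etc.).
    template <typename T>
    T twice(T x) { return x + x; }

    long sum_twice_mono(const int* xs, std::size_t n) {
        long total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += twice(xs[i]);    // no call remains in the hot loop
        return total;
    }

    // "Boxed"/virtual: one shared copy dispatches through a vtable. The compiler
    // can't tell which override runs, so the indirect call is an optimization
    // barrier and the loop body stays opaque.
    struct Doubler {
        virtual long twice(long x) const = 0;
        virtual ~Doubler() = default;
    };

    struct LongDoubler : Doubler {
        long twice(long x) const override { return x + x; }
    };

    long sum_twice_boxed(const int* xs, std::size_t n, const Doubler& d) {
        long total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += d.twice(xs[i]);  // virtual call every iteration
        return total;
    }

    int main() {
        int xs[] = {1, 2, 3, 4};
        return int(sum_twice_mono(xs, 4) - sum_twice_boxed(xs, 4, LongDoubler{}));
    }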

In a sense, monomorphization is a good match for AOT compilation and expressive type systems, while boxing works better for more dynamic implementations and simple (or no) type systems. Languages like Go straddle these two extremes uncomfortably.


Is Julia an example of the idea in the last paragraph?


Not if I'm understanding it correctly. The point where Julia IR specializes on types is between `@code_lowered` and `@code_typed`: lowering does not depend on type information, but type inference and inlining do. What the author proposes seems to be more like generating LLVM IR or even machine code with holes in it, which get filled in with type-specific information at the very end. I think that would be quite hard to make work without generating terribly generic, slow code in the first place, especially in a language with dynamic lookup as expensive as Julia's (lookup we almost never actually pay for, thanks to aggressive monomorphization).
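
For what it's worth, here's a rough C++ approximation of that "holes filled in later" idea (closer in spirit to Swift-style witness tables than to anything Julia actually does; all names are made up): a single compiled generic body gets its type-specific operations handed in through a table, which is exactly the kind of indirection that tends to make the generic code slow.

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // The "holes": per-type size and operations, supplied from outside.
    struct TypeWitness {
        std::size_t size;
        void (*add)(void* out, const void* a, const void* b);
    };

    // One generic body, compiled once. Everything it can't know statically
    // goes through an indirection into the witness table.
    void sum_pair(void* out, const void* a, const void* b, const TypeWitness& w) {
        w.add(out, a, b);
    }

    // Type-specific pieces, plugged in "at the very end".
    void add_int(void* out, const void* a, const void* b) {
        int r = *static_cast<const int*>(a) + *static_cast<const int*>(b);
        std::memcpy(out, &r, sizeof r);
    }

    int main() {
        const TypeWitness int_witness{sizeof(int), add_int};
        int a = 2, b = 3, out = 0;
        sum_pair(&out, &a, &b, int_witness);
        std::printf("%d\n", out);   // prints 5
    }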



