
Zero-Overhead Metaprogramming - rbcoffee
http://stefan-marr.de/2015/04/zero-overhead-metaprogramming/
======
PaulHoule
We've been doing it a long time in Java.

It is very practical for a program to write a Java class, compile it, then
load the classfile, thus the metaprogrammed object runs as fast as anything in
Java.
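A minimal sketch of that workflow using the JDK's in-process compiler (`javax.tools`); the `Greeter` class and its `greet` method are invented for illustration:

```java
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class GenerateAndLoad {
    // Write a source file, compile it in-process, load the resulting
    // classfile, and call the generated method. Reflection is only needed
    // for the first call; afterwards the object is an ordinary
    // JIT-compiled Java object.
    static String generateAndRun() throws Exception {
        String src = "public class Greeter {"
                   + "  public String greet() { return \"hello\"; }"
                   + "}";
        Path dir = Files.createTempDirectory("gen");
        Files.writeString(dir.resolve("Greeter.java"), src);

        // Compile with the JDK's built-in compiler (requires a JDK, not a bare JRE).
        int rc = ToolProvider.getSystemJavaCompiler()
                .run(null, null, null, dir.resolve("Greeter.java").toString());
        if (rc != 0) throw new IllegalStateException("compilation failed");

        // Load the fresh classfile and invoke the generated method.
        try (URLClassLoader loader = new URLClassLoader(new URL[]{dir.toUri().toURL()})) {
            Class<?> c = loader.loadClass("Greeter");
            Object greeter = c.getDeclaredConstructor().newInstance();
            return (String) c.getMethod("greet").invoke(greeter);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(generateAndRun()); // prints "hello"
    }
}
```

In practice you would usually hide the reflective first call behind an interface that the generated class implements, so subsequent calls are plain virtual dispatch.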

~~~
cbsmith
Agreed... though it can be a bit of a pain to code that way.

------
ksec
What I care about is when I am going to see this in JRuby. And hopefully in MRI as well.

~~~
smarr
It is already used in the JRuby+Truffle backend. See for instance:
[http://www.chrisseaton.com/rubytruffle/pushing-pixels/](http://www.chrisseaton.com/rubytruffle/pushing-pixels/)

------
Dewie3
> Metaprogramming and reflection are slow. That’s a common wisdom. [...], or
> really any metaprogramming abstraction in modern languages unfortunately
> comes at a price.

Hold on. That might well be true for dynamic reflection. But _metaprogramming_
is a wide term. It seems simple enough to give an example of metaprogramming
that comes at no runtime cost: compile-time metaprogramming.

Kiselyov has a catchy term for one approach to compile-time metaprogramming:
"Abstraction without guilt".

[http://okmij.org/ftp/meta-programming/tutorial/](http://okmij.org/ftp/meta-programming/tutorial/)

~~~
bazzargh
The paper discusses compile-time metaprogramming (and its disadvantages
relative to this approach) in section 6. _"Unfortunately, to enable the
optimization of reflective operations, the MOP needs to be severely restricted
and for instance metaobjects cannot change at runtime...Furthermore, most
incarnations are not as powerful as MOPs in that they cannot redefine the
language’s semantics."_

~~~
cbsmith
> Furthermore, most incarnations are not as powerful as MOPs in that they
> cannot redefine the language’s semantics.

Yeah, but there is a reason for that... It turns out to be less useful than
you think, and altering language semantics in a meaningful way hampers an
optimizer's ability to make code execute efficiently (without the language's
semantics, you really can't make assumptions).

> MOP needs to be severely restricted and for instance metaobjects cannot
> change at runtime...

True as far as it goes, but to the extent that they can change, the
performance penalty is there _because_ you can't resolve issues at compile
time. There are a variety of tricks employed to get you a hybrid model, where
you effectively have a JIT optimize after dynamic binding, but if you truly
have mutating metaobjects, there is an undeniable cost that you can't get
around (and the benefits really aren't that huge).

~~~
smarr
With generalized polymorphic inline caches (dispatch chains) you get that cost
down to the dynamic check and the JIT compiler can remove all reflective
overhead.

So, in the end, you get a powerful MOP without reflective overhead.
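To make the mechanism concrete, here is a rough sketch of a dispatch chain (not the Truffle implementation itself; the method-table "metaobjects" and the `op` selector are invented for the sketch). Each cached entry guards on the identity of the metaobject and falls through to the next entry on a miss:

```java
import java.util.Map;
import java.util.function.IntUnaryOperator;

public class DispatchChainDemo {
    // Stand-in metaobjects: per-class method tables (an assumption for this sketch).
    static final Map<String, IntUnaryOperator> TABLE_A = Map.of("op", x -> x + 1);
    static final Map<String, IntUnaryOperator> TABLE_B = Map.of("op", x -> x * 2);

    interface Node { int execute(Map<String, IntUnaryOperator> meta, int arg); }

    // Cached entry: a cheap identity check on the metaobject, then a direct
    // call to the resolved target. Behind the guard the target is a constant,
    // which is what lets a JIT inline through it.
    static Node cached(Map<String, IntUnaryOperator> expected,
                       IntUnaryOperator target, Node next) {
        return (meta, arg) -> meta == expected
                ? target.applyAsInt(arg)
                : next.execute(meta, arg);
    }

    // Chain tail: the full "reflective" lookup. A real system would also
    // extend the chain with a new cached entry here.
    static final Node UNCACHED = (meta, arg) -> meta.get("op").applyAsInt(arg);

    public static void main(String[] args) {
        // The chain this call site would have after seeing TABLE_A, then TABLE_B.
        Node chain = cached(TABLE_A, TABLE_A.get("op"),
                     cached(TABLE_B, TABLE_B.get("op"), UNCACHED));
        System.out.println(chain.execute(TABLE_A, 41)); // 42 (41 + 1)
        System.out.println(chain.execute(TABLE_B, 21)); // 42 (21 * 2)
    }
}
```

The only per-call cost left on the fast path is the guard; everything reflective has been hoisted into chain construction.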

~~~
cbsmith
> With generalized polymorphic inline caches (dispatch chains) you get that
> cost down to the dynamic check and the JIT compiler can remove all
> reflective overhead.

...which results in a scenario no different, in terms of overhead or
expressiveness, from what you'd have with a static MOP with runtime dispatch.

~~~
smarr
I don't follow that logic. First, I don't know what you mean by "static". That
you can't change the metaobject associated with a base-level object? That's a
loss of expressiveness.

And what do you mean by "runtime dispatch"? Static and runtime dispatch
combined? I am not sure what you have in mind.

With dispatch chains, the dispatch is resolved at runtime and the JIT
compiler can inline through it. But you need that runtime information; a
static compiler, like the one for OpenC++, usually can't do that.

~~~
cbsmith
A simple example would be using templated classes and functions, all of which
are navigated to from a virtual base class.
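A Java analogue of that hybrid (the class and method names are invented for illustration): statically specialized implementations behind a single interface, so the only dynamic cost is one virtual dispatch, which the JIT's own inline caches can then devirtualize:

```java
// One dynamic dispatch point at the boundary; behind it, statically
// specialized ("template-like") code.
interface Op { int apply(int x); }

// Each concrete class is final and monomorphic, so once the receiver
// type is known at a call site, the JIT can inline apply() directly.
final class Doubler implements Op { public int apply(int x) { return x * 2; } }
final class Squarer implements Op { public int apply(int x) { return x * x; } }

public class StaticMopDemo {
    // The single runtime check: a virtual call through the base interface.
    static int run(Op op, int x) { return op.apply(x); }

    public static void main(String[] args) {
        System.out.println(run(new Doubler(), 21)); // 42
        System.out.println(run(new Squarer(), 7));  // 49
    }
}
```

In C++ the same shape would use templated classes behind a virtual base class, with the `virtual` call as the lone dynamic check.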

