Like any syntactic construct in a Turing-complete language, constexpr is a convenience - in this case, the ability to unify compile-time-generated data with runtime code paths and data. There are some practical cases I've seen where it can be a real improvement to hygiene (i.e. you can replace a compile-time script/program generating data with one unified code base), e.g.:
1. Perfect hash generation. Rather than using a separate gperf run, you can generate your hashes during compilation of the relevant TU.
2. Format string lookup tables. For fast formatting you can push an index and positional arguments to a queue, and handle formatted string construction in another thread. Broadly speaking, lookup tables in general can be generated at compile time rather than imported into the code as static data (see the sketch after this list).
3. Complex starting conditions for physics simulations. If you have, say, some initial matrix that doesn't change between simulation runs - you can exercise existing code to generate this matrix at compile time rather than needing a separate process run or script.
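As a concrete sketch of point 2 (and point 1 in spirit), here's a lookup table built by an ordinary constexpr function, so the generator and its consumer live in one code base. This assumes C++17; the CRC-32 table is just a stand-in for whatever table you'd otherwise produce with a codegen script:

```cpp
#include <array>
#include <cstdint>

// Build the standard 256-entry CRC-32 table at compile time,
// replacing a separate generation step.
constexpr std::array<std::uint32_t, 256> make_crc32_table() {
    std::array<std::uint32_t, 256> table{};
    for (std::uint32_t i = 0; i < 256; ++i) {
        std::uint32_t c = i;
        for (int k = 0; k < 8; ++k)
            c = (c & 1u) ? (0xEDB88320u ^ (c >> 1)) : (c >> 1);
        table[i] = c;
    }
    return table;
}

// Lands in the binary as ordinary static data, but is generated by
// the same code base that consumes it.
constexpr auto crc32_table = make_crc32_table();
static_assert(crc32_table[1] == 0x77073096u);
```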
These approaches are not without their downsides - particularly if the translation units in question need to be recompiled often, or if you're using whole program optimisation - but the benefit of keeping generated data in lockstep with the code consuming it can be worthwhile.
(That being said, I personally find articles like this incredibly edifying.)
5. My compiler is a rackmount somewhere; even if constexpr evaluation is somewhat slower than runtime code in a particular case, I may prefer to use the rackmount's plentiful power rather than the slim, light battery on the device where the final code will run.
Before, you'd either have to hope the compiler did that for you magically, do something ugly with the preprocessor, or bake in numbers.
In a way, all your examples are just more extreme versions of the same thing, but I guess my point is that it's not something fringe for fancy optimizations; it's daily use.
Happy to answer any questions (including "why on earth would you do this?") :)
Why on earth would you do this?
Good question! :)
It actually started out as a learning exercise -- I didn't (and still don't) know much about ray tracing, and so I thought it would be good to study a simple example by translating it from a language I didn't know (TypeScript) to one I was more familiar with.
This involved writing a simple vec3 class, and it seemed natural to make that constexpr. Then as I went through I realised that many more functions could be made constexpr too. And then if I hacked together some (admittedly very poor) replacement maths functions, even more of it could be constexpr...
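(For a sense of what that looks like -- a minimal sketch of such a vec3, not the actual class from the project, assuming C++17:)

```cpp
struct vec3 {
    double x = 0, y = 0, z = 0;

    constexpr vec3 operator+(const vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    constexpr vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    constexpr double dot(const vec3& o) const { return x * o.x + y * o.y + z * o.z; }
};

// Usable in constant expressions and at runtime alike.
static_assert(vec3{1, 2, 3}.dot(vec3{4, 5, 6}) == 32.0);
```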
At this point, it became a challenge to see whether I could make the whole thing run at compile time. The only tricky part was avoiding virtual functions -- I'd originally, naturally enough, translated TypeScript "interface"s into C++ abstract classes. I ended up using two different approaches: firstly using std::variant and std::visit (the any_thing class in the source code), and secondly using a struct of function pointers (for the surfaces) -- because while virtual calls aren't allowed in constexpr code, calls via a function pointer are just fine.
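To illustrate the second approach, here's a hand-rolled "vtable" of function pointers (the names and the toy logic are mine, not the post's actual code). The call goes through a function pointer, which constexpr evaluation accepts where it would reject a virtual call (pre-C++20):

```cpp
// A surface described by a struct of function pointers instead of virtuals.
struct surface {
    double (*reflectivity_at)(double x, double z);
};

constexpr double checkerboard(double x, double z) {
    return (static_cast<int>(x) + static_cast<int>(z)) % 2 ? 1.0 : 0.1;
}

constexpr surface checker{&checkerboard};

// Dispatch through the pointer works in a constant expression.
static_assert(checker.reflectivity_at(1.0, 2.0) == 1.0);
```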
In the end, I was pretty satisfied that I'd managed to push constexpr as far as I could take it. I had intended to blog about it, but never got round to it ... and now 18 months later someone has saved me the job by posting it to HN anyway :)
> Were there particular features of 17 that you were able to benefit from?
I used std::variant from C++17 for polymorphism, as a workaround for the fact that you can't have constexpr virtual functions (these have since been proposed for C++20). I also used `std::optional` when testing for intersections between a ray and a "thing", but it would have been possible to structure the code in such a way that this wasn't necessary (for example, by using a bool return value and an out parameter).
Lastly, I used constexpr lambdas (new in C++17), though again, it would have been possible to do things differently so these weren't required -- and indeed I did have to work around the fact that Clang (at the time) didn't support evaluating capturing lambdas at compile time.
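Roughly, the pieces fit together like this (sphere and plane are made up for the sketch; any_thing is the name mentioned above): std::variant + std::visit in place of virtual dispatch, std::optional for the hit test, and a capturing constexpr lambda doing the visiting:

```cpp
#include <optional>
#include <type_traits>
#include <variant>

struct sphere { double radius; };
struct plane  { double y; };
using any_thing = std::variant<sphere, plane>;  // a closed set of "things"

// Toy intersection test: just enough logic to show the shape of the API.
constexpr std::optional<double> hit(const any_thing& t, double ray_y) {
    return std::visit([ray_y](const auto& shape) -> std::optional<double> {
        if constexpr (std::is_same_v<std::decay_t<decltype(shape)>, sphere>) {
            if (ray_y < shape.radius) return shape.radius - ray_y;
            return std::nullopt;  // the ray missed
        } else {
            return shape.y - ray_y;
        }
    }, t);
}

static_assert(hit(any_thing{sphere{2.0}}, 1.0).value() == 1.0);
```

The lambda here captures ray_y -- exactly the kind of compile-time capturing lambda that Clang choked on at the time.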
An analogy would be if we used only ball-and-socket joints everywhere in machines, even where a single-axis hinge would work just fine. And then just accepted that machines are inherently wobbly, rather than questioning whether extra degrees of freedom (whether needed or not) are always better than fewer.
I think sometimes what we really want when we use class inheritance is actually something like a std::variant where we don't have to know all of the constituent types in advance.
Well, most of the software I use every day has some sort of plug-in interface, which at some point comes down to loading a .dll at runtime and getting derived instances of a base type defined in the host app.
The ARE YOU NOT ENTERTAINED of programming.
Are there theoretical bounds, or anything else, preventing compilers from being faster, or is it just implementation/optimization details?
(GHC tends also to be quite slow.)
Of course, the raytracer source code is the dynamic input of a compiler. The Futamura projection perspective suggests interesting options here, e.g. specializing raytracers to particular scenes.
Both MSVC and Clang already support them experimentally fwiw.
It should be noted that this was done in very old D, before a lot of the compile-time improvements that modern D has. I guess that makes the feat more impressive for its time, while the same exercise today should be more pedestrian.
It's really nice that D was built with this sort of application in mind instead of having it added 30 years later. It makes the language feel far more cohesive and inviting for this sort of thing.
If C++ were able to discard old features, I might not be so scared of going back to it. The way it keeps growing without pruning does not make it feel very inviting.
Maybe if they started defining a 'modern C++' subset that you could limit yourself to via compiler flags and/or pragmas, it would make life a bit easier when starting new codebases.
Obviously you can always make a personal style choice about what to use, but then you have to find one. Anyone have a favorite set of recommendations for C++ feature usage?
Also, if you start with 'X did this a long time ago', you always end up in a Lisp-hole. :P
Here it is: https://github.com/isocpp/CppCoreGuidelines/blob/master/CppC...
Yes, please show us all those high-performance libraries written in Lisp.
Lispers live in a parallel universe, where performance issues are mere distractions from higher-level thinking.
I strongly disagree. C++ has very nice tools for abstraction: higher-kinded types, dependent types (to some extent), lambdas & function objects, overloads, etc. It's a pain to design a data flow in C, but in C++, for instance, it can just be something like this: https://github.com/RaftLib/RaftLib/wiki/Linking-Kernels or this: https://ericniebler.github.io/range-v3/.
Speed also isn't as much of an issue, because macros (and the functions they call) can be compiled before execution.
I agree with the utility of the approach. Given the narrow space the language has to move in, they actually placed the feature really well.
But C++'s variety of parentheses is larger!
But in practical use, being axiomatic can be an enormous weakness. After all those much-derided committee meetings for C++, there is a standard, and having a standard allows people to collaborate without every team having to invent their own conventions and teach those conventions to everyone they work with.
Old-style C macros can be seen as an ugly way of providing Lisp-like power. One of the problems with macros is that people abuse them to do crazy, non-standard things, and in doing so make code really opaque to others. So in C, having to invent a custom language to do something is a bug; in Lisp, it's a feature. That distinction has a lot to do with why the world is running so much C and C++ code even though a Lisp is more elegant.