C++ : if constexpr isn't broken (brevzin.github.io)
74 points by Bl4ckb0ne 38 days ago | 80 comments



"a somewhat more awkward and uglier, yet slightly more correct way... all the functionality is right there. We’re not missing anything" might as well be C++'s motto :)


"The amount of things you have to know in C++ is arguably a bit larger."

Arguably, lol.


In Bitcoin SV, there is a saying "devs gotta dev" to refer to the mass of crypto developers that add unnecessary complexity to their coins to solve minor or even nonexistent problems while not seeing the bigger (usually economic) picture. Everyone wants to make their mark on the space and there aren't enough leaders to direct the energy. C++ looks to be doing the same and not able to see the impact this will have on adoption. Oh well. I personally moved from C++ to Rust last year and haven't looked back.


I disagree - I work in high-performance computing and I think in our case the complexity of C++ is actually mirroring that of the problem domain. Nothing else gives us the level of abstraction that C++ does, while still offering convenient ways to specify low-level performance semantics.

I think for a while Fortran was pulling ahead for large-scale numerical computing, but the mess of different options for parallelism (and also, I think, in our case a proliferation of algorithm choices that made not just polymorphism but the explicit distinction between polymorphism and templates more important for code architecture) has brought C++ back into the limelight.

I know that some competitors in our space have a Fortran codebase and they essentially had to hack together C++ Templates as part of their build process (i.e. pre-process a template file and generate multiple sources which are then compiled).


I agree that in your case C or C++ is probably the way to go (I usually favor C, because while it affords fewer options for abstraction, it also doesn't give you so many possibilities to shoot yourself in the foot in unimaginable ways; still enough, mind you).

Your parent poster is also right. Rust offers a lot of safety guarantees, a modern, well-designed language around them, package management, test and benchmarking integration, and extremely good type safety, while still mostly having zero-cost abstractions for many things. If your goal is performance, control and safety, Rust is a much better choice than either C or C++. I'm still hopeful that it will become the next big embedded language (mostly processor support is what's missing here, but thanks to LLVM there's hope), especially since we haven't seen any other good attempt at GCless computing (no GC is pretty important for real-time systems).

But I must admit that with C++ you can probably eke out a few % more performance. The thing is, though, that for 99% of projects where performance and no-GC are important, Rust will be fast enough, and thanks to safe abstractions it is often faster than C++.

One example is if you have a lot of text processing to do. Often you can reuse a lot of the strings, and in Rust you can handle this safely thanks to the borrow checker. Allocating new strings is pretty much the biggest performance cost in most string processing, so avoiding it matters.

Rust gives you a lot of safety when it comes to multithreading as well. It's perfect for creating little servers and using all your processors for performance.

These two together, and the lack of 2-3 day debugging sessions where you trace down that one tiny memory bug in C++, make Rust the biggest contender to replace C++. Of course, if you only care about performance, C or C++ is probably still a slightly better choice. Slightly.


Looking at some C++ extension proposals, many are definitely a case of "devs gotta dev." Fortunately most of these don't have a chance in hell of making it in.


Yeah, Stroustrup put his foot down this year on wacky proposals.


It's already there, but it's really ugly and hard to read, would be my conclusion from this.


Yup.

And while the rest of the world knew how to find a file size... how long was it before C++ cleaned that very very simple corner up?

And if you look under the hood at the implementation of std::conditional.... I feel ill.

The thing I like about Alexandrescu is he invented this meta programming stuff... super super super clever magic...

...and then backed away saying it shouldn't have to be so hard, we shouldn't need magic to program.
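For reference, the core of std::conditional itself is tiny, roughly a primary template plus one partial specialization; the queasiness usually comes from what gets built on top of it. A minimal sketch, in a hypothetical namespace `my` to avoid colliding with the real one:

```cpp
#include <type_traits>

namespace my {
    // Primary template: when the condition is true, pick T.
    template <bool B, class T, class F>
    struct conditional { using type = T; };

    // Partial specialization: when the condition is false, pick F.
    template <class T, class F>
    struct conditional<false, T, F> { using type = F; };
}

// Compile-time checks that it behaves like std::conditional.
static_assert(std::is_same<my::conditional<true, int, double>::type, int>::value, "");
static_assert(std::is_same<my::conditional<false, int, double>::type, double>::value, "");
```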


"And if we are missing it, let's add it".


I used to be a C++ guy from the early 90s 'till about 2010. Every few months, I think "I should really learn the new C++ hotness", and then I read something like this. It's like a bunch of priests arguing about how best to fit a bunch of ugly angels on a pointless pin. It just feels so pointless.

If you must use C++, just use another language to generate the C++. All this template mumbo jumbo. Who says your meta-programming language has to be the same as your target language?

I used to spend some time looking for the perfect vector (as in linear-algebra) library. Templatized for vector dimension, data-type, blah blah blah. Dude, just use Python to generate it. Duplication doesn't violate DRY if the duplication is generated from a single higher-level source.

Doing meta-programming in C++ is a really limited way of thinking.


> fit a bunch of ugly angels on a pointless pin. It just feels so pointless.

A pointless pin would indeed be pointless.


> Who says your meta-programming language has to be the same as your target language?

I'd lay the blame at IDEs and the way they take over the build system along with IntelliSense. They don't play well with anything generated before the compile step. Even adding a build step is an alien concept to many, so we end up with the build step shoehorned into languages.


I don't think anyone on the C++ committee is proposing banning you from generating C++ code with Python. I also doubt anyone who's interested in C++ metaprogramming has failed to consider the possibility of doing codegen via other means. In light of these facts what seems rather pointless is your comment.


I had similar thoughts when looking through the slides for Andrei's talk. A main argument in favor of static if, over the mechanisms C++ offers to do the same things, seems to be the familiarity of if statements; but having scope not be respected is such an unintuitive difference for anyone with experience in curly-brace languages that it seems like a net loss in clarity to me. It looks more like using macros for conditional compilation.

Overuse of if statements is generally something of a code smell and tends to make code harder to follow. To my taste the C++ way of doing things is generally preferable as a result even if more verbose. Given this is probably somewhat a matter of taste it is hard to justify changing C++ to match the D way IMO.
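For contrast, C++17's if constexpr does respect brace scope, which is the behavior being argued for here: a name declared inside the taken branch does not leak out. A small sketch (the widen function is a made-up example):

```cpp
#include <cstdint>
#include <type_traits>

template <class T>
auto widen(T value) {
    if constexpr (sizeof(T) < sizeof(std::int64_t)) {
        using Wide = std::int64_t;   // visible only inside this branch
        return static_cast<Wide>(value);
    } else {
        return value;                // Wide is not in scope here
    }
}

// Narrow integers are widened; already-wide ones pass through unchanged.
static_assert(std::is_same<decltype(widen(std::int32_t{1})), std::int64_t>::value, "");
static_assert(std::is_same<decltype(widen(std::int64_t{1})), std::int64_t>::value, "");
```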


Why is it every time I see a new article about C++ features my eyes glaze over? Am I just getting old, or has C++ jumped the shark?


Probably because you are used to the paralysed C++ pre-C++11. Nothing happened for years, and now we have new functionality every three years, supported by all major compilers. But also, many new features are not for global consumption. The audience of many metaprogramming features is library developers, for example.


Isn't every developer in a non-trivial software project a library developer? As soon as you have a common piece of functionality you want to reuse - that's a library.


I think that's dependent on language culture. Because the C++ community doesn't take language simplicity or comprehensibility seriously, there are lots of C++ developers who can't use or reason about surprisingly large parts of the language. So the community has rallied around the notion that "library" developers need to understand everything and that most developers will just glue together bits that the library devs made.

I mean, how many C++ developers actually write serious template code? How many of them could reliably explain what the keywords in the post do?

The idea that every developer is a library author (or the lisp extension that every developer is a language author) is common in many other language communities but it relies on the community working hard to make mastery of the language feasible for lots of people. The C++ community never bought into that notion; they inherited a very stratified class structure from Bell Labs.


>... the C++ community doesn't take language simplicity or comprehensibility seriously...

I think this is an unreasonable assertion and not borne out by a read of the committee discussions.


To clarify: of course they like simplicity when it costs nothing. But they consistently value other goods over simplicity.

For example: maintaining backwards compatibility. The community believes that it is more important that 20 year old C++ code run unmodified than that the language should be simplified. There's lots of stuff you could do to simplify the language but options dry up in a world where 20 year old code must be able to run unmodified.

So sure, the committee talks a lot about simplicity, but it isn't willing to sacrifice much.

Don't get me wrong: I'm glad that finally, in 2020, C++ will be almost but not quite as good as Common Lisp was at metaprogramming back in 1982. But it remains the case that eval-when and defmacro are both more powerful and dramatically simpler than anything the C++ committee has ever considered.


Two other goals for C++ are 'zero-cost abstractions' and 'leaving no room for a lower level language'. It does better on both of these goals than Common Lisp and they are important reasons for its popularity (along with backwards compatibility and easy interop with C APIs).


Zero-cost abstractions only exist in a world where you don't highly value language simplicity and comprehensibility.

Simplicity and comprehensibility were things the committee had to give up in order to pretend they had "zero-cost" abstractions. Nothing in life comes free: everything, including all abstractions, comes at some cost.


> Nothing in life comes free: everything, including all abstractions, comes at some cost.

Yes. As a slogan, it is imprecise. But it's always been talking about a very specific kind of cost: runtime costs. You're 100% right about there always being some kind of cost, but the slogan doesn't disagree with you.

(Some prefer "zero-overhead principle" instead to make this a bit more clear.)


(1) I love your writing.

(2) Even reducing it to runtime costs, it seems a bit nonsensical. Are C++ exceptions a zero cost abstraction? All the googlers I argued with about them would insist that they have unacceptably high runtime costs.

OK, but templates are surely zero (runtime) cost abstractions, right? Unless you start to worry about duplicate code blowing out your instruction cache but if that's a problem, no profiler in the world will ever be able to tell you, so I guess you'll never know just how costly the abstraction is, so you might as well continue believing it is zero...?


Zero cost abstraction is not the same thing as a free lunch. It's a goal that using the abstraction will have no runtime cost relative to implementing the same or equivalent functionality manually. It goes hand in hand in C++ with the "don't pay for what you don't use" principle (again talking about runtime performance cost).

When it comes to exceptions, it's generally true on x64 that you don't pay for what you don't use (there's no performance penalty to exception handling if you don't throw) although that hasn't always been true for all platforms and implementations. It's also generally true that you couldn't implement that kind of non local flow control more efficiently yourself, although the value of that guarantee is a little questionable with exception handling.

I'd argue that no language has a really good story for error handling. It's kind of tragic that we have yet to find a good way to deal with errors as an industry IMO. The most promising possible direction I've seen is in some of the possible future extensions to C++ - it's widely recognized as an area for improvement.

Template code bloat is another case of not imposing more cost than if you implemented it yourself and you have pretty good mechanisms in C++ for managing the tradeoffs.


> When it comes to exceptions, it's generally true on x64 that you don't pay for what you don't use (there's no performance penalty to exception handling if you don't throw) although that hasn't always been true for all platforms and implementations. It's also generally true that you couldn't implement that kind of non local flow control more efficiently yourself, although the value of that guarantee is a little questionable with exception handling.

I'm inclined to agree with you but just about everyone at Google says the opposite and most C++ shops I've seen agree with them. I've made this argument and lost repeatedly. So, it seems like the community can't even agree on which abstractions are zero cost (or maybe whether some zero cost abstractions are actually zero cost?). To the extent that the community itself has no consensus about these things, maybe they're not a marketing slogan that's helpful to use.


1. Thank you!

2. The other reply is right. Zero cost compared to the best possible implementation.


Lots of things come with tradeoffs but relative to a certain set of goals and priorities there are design decisions that are net better than others. Not everything is a zero sum game.

In this context 'zero-cost abstractions' refers to zero runtime performance cost and C++ comes closer to achieving that than most other languages. It doesn't mean zero compile time cost or zero implementation complexity cost but both of those things can end up better or worse due to design decisions and quality of implementation. Given that zero cost refers to runtime performance however, the committee is not 'pretending' they have zero cost abstractions.

It is true that simplicity and comprehensibility are not C++'s highest goals / values but they are not ignored or seen as having no value. Indeed they are major topics of discussion when new features are being considered. Sometimes they are in tension with or even in direct conflict with other goals but not always.


> For example: maintaining backwards compatibility. The community believes that it is more important that 20 year old C++ code run unmodified than that the language should be simplified. There's lots of stuff you could do to simplify the language but options dry up in a world where 20 year old code must be able to run unmodified.

This pains me, but every time I think "just toss XXX out, gddammit!" I think of IPv6. C++11 is still the most popular dialect of C++, even for new development I believe, and C++14 is the hot new thing to many people.

> Don't get me wrong: I'm glad that finally, in 2020, C++ will be almost but not quite as good as Common Lisp was at metaprogramming back in 1982. But it remains the case that eval-when and defmacro are both more powerful and dramatically simpler than anything the C++ committee has ever considered.

C++ is held back by having statements. If the basic structure were an expression a lot of programming, much less metaprogramming, would be simpler.


> Isn't every developer in a non-trivial software project a library developer? As soon as you have a common piece of functionality you want to reuse - that's a library.

A library, especially a C++ library, is way more than reusable code. Developing a library requires the developer to spend time making fundamental design decisions that they don't have to make when developing a module lost somewhere within a project tree, such as how to organize the project into interface and private source files, how the lib should be deployed, how to meet upstream dependencies, how to avoid breaking compatibility with previous releases while making your code resilient to subsequent changes, how to add metadata to your project, how to handle optional features, etc.


I understand this sentiment but think it is problematic. In fact, I think it is part of the reason a lot of libraries aren't very good.

Working in a large project, as you note, naturally leads to many conversations about sharing code and/or functionality. So you bundle something together or add an access point, call it "library X" and you are done, right? Any problems can be patched around later as you are working on a common base.

To me, this is missing > 50% of the work in designing and delivering a library for general use, which is why it often causes problems when you treat it as "done".

Which isn't to say this isn't the right thing to do in your situation! It's just that this is vastly different than what someone might be talking about in "library developer". It's not prima facie crazy to have language features mostly aimed at the latter, if it's a language often used for it. Which, for better or worse, c++ (still) is.


It wasn't my intention to say that writing a library in C++ is trivial. Just that it's inevitable in non-trivial projects. So I agree that lots of knowledge and thought has to go into API/ABI design, versioning etc. On the other hand, just because C++ gives you lots of choices that other languages don't, it doesn't mean that only a handful of "library developers" will have to make those choices. Almost everybody has to make them, coincidentally or consciously.


I guess there are different levels of libraries. Something like boost is a different thing from encapsulating business logic into a library.


And that's exactly what's different in D, where the audience for meta-programming is "everyone". It's not just about more power, but how accessible this power is. To think you need to suffer to have this power is not true.


D vs. C++ in this list of examples looked equivalently complex and equivalently power-user focused. static if not introducing a new scope seems like the sort of confusing, error-prone edge case that trips up newcomers, for example. We are taught very early on that in C-style languages curly braces mean a scope. Except here in D, in this particular case, for some reason they don't, and it isn't clear why until you are very deep into understanding the language.

Similarly operator overloading via a string that tells you what you are overloading seems... insane? Very error-prone & complex?

Not that C++ is great here or anything, but it seems disingenuous to claim D's complex thing is for everyone while C++'s nearly equally complex thing is too complex for everyone.


> Similarly operator overloading via a string that tells you what you are overloading seems... insane? Very error-prone & complex?

Frankly you should try D for just 5 minutes and see for yourself, because no it is really sane and works well. Never seen anyone complain about this...

See here it is used to implement all operators for small vectors in 46 lines: https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/m...


It appears to be primarily useful only if you are writing a pure wrapper class where you proceed to delegate to an actual implementation.

But you could still do that and not be string-based. It could (and should!) be an enum of the operator instead. opBinary takes a fixed number of ops, but the parameter type of string has an infinite number of values.

Whether or not the design of having a single operator overload method is a good idea or not is independent from what I'm specifically calling insane which is that the parameter type to that method is a string.


If it is string-based metaprogramming you find ugly (as many do at first), consider that the alternatives are maybe not much better in practice.

https://forum.dlang.org/post/l5srs7$2m3$1@digitalmars.com


Nothing stops "everyone" from using the features which suit library developers.


Indeed, but as a large, general-purpose systems programming language there are many features that support certain important and special cases, and no application of the language, even library development, uses them all.

As an example, we needed locked containers, so we wrote a little template and specialized it over the couple of containers we needed. It supported just what we needed. If this same functionality were extended to the standard container library it would not only have needed to be thought out to handle every non-locking case, but would have either needed a lot of repetitive boilerplate (and repetitive specializations) or else additional hair that was not worth our while to learn/use. We were able to avoid the problem by adding some documentation in the local style guide.
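As a rough illustration of the kind of small in-house template described above (all names hypothetical, and deliberately much simpler than anything fit for the standard library):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

// Hypothetical minimal wrapper: serializes access to one container,
// exposing only the couple of operations we actually needed.
// A standard-library version would have to think about iteration,
// composing operations atomically, every container kind, and so on.
template <class Container>
class Locked {
public:
    void push_back(const typename Container::value_type& v) {
        std::lock_guard<std::mutex> guard(mutex_);
        data_.push_back(v);
    }
    std::size_t size() const {
        std::lock_guard<std::mutex> guard(mutex_);
        return data_.size();
    }
private:
    mutable std::mutex mutex_;
    Container data_;
};
```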


Except ever increasing time to learn all the language features and associated best practices.


While true on the face of it: I haven't had the need to learn many other languages so I haven't had the need to learn the best practices of them.

Use what you need, learn what you need. Don't pay for what you don't use :)


You can't in C++ because learning the language takes 10 years and is a never-ending story.


Any true language is ever evolving, even spoken languages.


Not all languages are the same, C++ is vastly more complex than many other programming languages (arguably all of them). The only reason to keep learning C++ after years of practice is the sunk cost.


On the other hand, templates get abused in D as a workaround to avoid writing attribute boilerplate across all function definitions.


I had a similar feeling. As a long-time C# programmer (how old is C#? That many years), I look at this and get really wide-eyed, "how is that at all a good idea?!"

A lot of these features seem like ways to hack around the fact that C++ templates are not generics, they are literally templates for writing copies of classes. It seems like features like this are going to make code size explode.

A lot of the examples also seem very smelly from an OOP perspective. Why should one class be able to have different fields depending on template parameters? That seems like something one should do in a subclass.
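For what it's worth, giving a class different fields per template argument doesn't require anything exotic in C++; explicit specialization is enough. A minimal sketch with made-up types:

```cpp
#include <cstddef>

// Primary template: a plain 2D point.
template <std::size_t N>
struct Point { double x, y; };

// Explicit specialization: the 3D variant gains a field.
template <>
struct Point<3> { double x, y, z; };

static_assert(sizeof(Point<3>) == 3 * sizeof(double),
              "the z member really exists in Point<3>");
```

Whether this beats a subclass is exactly the taste question; there is no virtual dispatch and no shared base, just two unrelated types stamped out from one name.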


I'm not sure what you mean by generics if you think that C++ templates are not a kind of generics. Do you prefer that everything happens at runtime?

Similarly, subclasses as opposed to templates carry a runtime cost. Virtual function calls are not free. Why pay that cost if you can deduce the right code from the types at compiletime? Besides, and this is probably a matter of opinion, I find code hard to follow that uses inheritance heavily.


By "generics", I presume they mean template parameters which are checked at declaration, not at use.


That sounds like concepts, as used extensively in the OP.


One significant difference with concepts is that they are optional.


Gradual typing is mostly a good thing for metaprogramming I think. Even if it was just a compromise for backwards compatibility I'd still take it over C#'s rather limited generics.


I think C# is a good language, but I also am not sure that it's a poster child for this particular language feature. (I mean this literally; I don’t have enough experience to really know how it compares to the usual suspects in this area.)


.NET's and thus C#'s generics are very well designed but they rely on the design of .NET as a whole being a managed language, so it can get away with a slightly more sophisticated runtime representation.

An example of this is that generics are "reified". This is to say that you can treat each instantiation as a unique type at runtime. This is not the same as monomorphisation though, as there are rules used to promote code sharing and avoid overly aggressive specialization. For example, boxed type parameters will generally share all code paths until the JIT decides to specialize a specific call site. Value types tend to gain a lot more from immediate inlining since autoboxing (another peculiar feature of .NET) can be avoided in more cases, so these are usually specialized upon reification (much closer to what you'd see in a template, though we're still limited to type parameters here).

In Java, the lack of user defined unboxed value types makes these distinctions less attractive and thus you see they opt for type erasure and reliance on clever JIT heuristics. They do miss out on things like type-specific static class members but I've only found a few uses for this in .NET (and even then it tended to surprise folks, main case was for a type safe structured logging system with efficient runtime control of trace points).

Having said all of that, the code bloat of C++'s templates causes issues with compile time, error message comprehension, and to some degree, cache efficiency but in return the programmer gets almost complete control over all of the trade-offs mentioned above. One can have templates duplicate code, or use abstract classes as interfaces for virtual dispatch, or even a mix where the template derives shared instances for certain types. This "we can have it all" mentality is a burden that may eventually be addressed by making the common cases easier to comprehend, debug, and compile.

I'm not necessarily going to wait for C++ to change but it's interesting to watch it make its way towards new goals while other languages mature enough to replace it in certain cases (Rust obviously but there are others like Zig which I think are worth watching).


.NET also does monomorphisation when doing AOT compilation to native code, on .NET Native or Xamarin for iOS for example.


Interesting. I've been meaning to look into the AOT compilation strategies at some point. I'll have to do some comparisons with the code I'm working with.


Thanks, this matches my understanding, but I was not confident that it was correct.


A bit of a test case for me, coming from the domain of games / 3D graphics programming, as to whether a language has a 'sufficiently powerful' generics feature set is whether it is possible to implement a generic 'short vector' type as used in any type of graphics programming. I think 'good' generics / metaprogramming support should let you meet most of these goals in a generic 'short vector' type:

- Support 'n' dimensional vectors (2, 3 and 4 are the only ones commonly used in 3D graphics and games).

- Support a choice of underlying element type (at least float, double and int but it's handy to be able to support custom types like a fixed point or rational type too).

- Support operator overloading for natural expression of things like adding two vectors.

- Be very close or identical in performance to the equivalent hand written variant for every combination of dimension / element type (excluding optimizations for particular SIMD element widths etc.)

- Not be significantly worse for debugging than the equivalent hand written code (this is as much a tooling issue as a language issue).

It's possible to meet all of these requirements in modern C++ without using any particularly exotic metaprogramming functionality. It's impossible in C#. I don't know of any other language that meets these requirements as well as C++ although I'm not really familiar with the facilities offered by e.g. Rust.
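As a sketch of how plain modern C++ templates meet most of the bullet points above (no SIMD specializations, all names made up):

```cpp
#include <array>
#include <cstddef>

// Generic short vector: element type and dimension as template parameters.
template <class T, std::size_t N>
struct Vec {
    std::array<T, N> v{};

    T& operator[](std::size_t i) { return v[i]; }
    const T& operator[](std::size_t i) const { return v[i]; }

    // Componentwise addition; for small fixed N the loop is
    // trivially unrolled by the optimizer, matching hand-written code.
    friend Vec operator+(Vec a, const Vec& b) {
        for (std::size_t i = 0; i < N; ++i) a.v[i] += b.v[i];
        return a;
    }
};

using Vec3f = Vec<float, 3>;   // the common graphics case
```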


Makes sense, yeah. I think that's reasonable. You can do this in Rust (https://www.nalgebra.org/vectors_and_matrices/ as an example), but one caveat: the n-dimensionality aspect isn't as nice (see the "Type-level integers" section of that doc) until "const generics" lands; that's completely orthogonal from the "generics" vs "templates" discussion, except that it's easier to implement (from the compiler's perspective) via templates.


None of these require C++'s gradual typing- the only real reason C# can't meet them is that it lacks value parameters to generics, and perhaps some limitations to its value types.


This comment wasn't about gradual typing, it was about power / generality of generics / metaprogramming support. C# fails at this in more ways than lacking value parameters and having limitations on value types.

There is no efficient and syntactically pleasant way to work with numeric types generically for example (I can't write 'a + b' and have it work for any types that provide operator+). That doesn't require C++ compile time duck typing (you could mandate something like C++ Concepts to specify a numeric interface, I believe this is sort of what Haskell typeclasses do) but it's easy, efficient and syntactically pleasant to do in C++.

The limitations of C# generics bite it in other ways too - the members of Enumerable are useful and the syntax is ok (not as nice as F# or C++ Ranges) but C# can't match the performance of C++ Ranges given the way they are implemented. They also get bitten by the operator problem - look at how Enumerable.Sum() has implementations for every built in numeric type and doesn't out of the box support your custom Numeric type.
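To make the "a + b" point concrete, here is a minimal C++ sketch: one template works for built-in numbers, std::string, and a user-defined type alike, with no boxing or interface dispatch (the Meters type is a made-up example):

```cpp
#include <string>

// Works for any type with a suitable operator+.
template <class T>
T add(T a, T b) { return a + b; }

// A user-defined numeric-ish type picks up the generic code for free.
struct Meters { double value; };
Meters operator+(Meters a, Meters b) { return Meters{a.value + b.value}; }
```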


Fair enough, typeclasses/traits were what I had in mind here. You're right that C# doesn't have a common interface to constrain your generics with for operator+/etc.


C++ supports generics through template/macro expansion, rather than through parametric polymorphism.


I'm very familiar with C++ and C# and overall I find I like C++ a lot better. There are a few nice convenience features of C# but knowing both languages I tend to find whenever they differ I feel like C++ made the better design choice.

And C# going all in on OOP is a part of that. It's reasonably clear now that OOP was mostly a big mistake and a dead end. C++ is less infected by going all in on it as it has always been a multi paradigm language.


It's because C++ is coming up with more complex ways to solve emergent problems from C++'s own complexity. So to appreciate why a proposal exists in the first place, first you have to catch up on the context that current C++ practitioners already have.


Slides from Alexandrescu's referenced talk [1].

[1] http://erdani.com/MeetingC++2018Keynote.pdf


I agree with this blog writer. Allowing typedefs to cross static if boundaries would be a major change to the way I read code. Unless there's a more substantive example demonstrating why using the existing metaprogramming is insufficient, its benefits are imo outweighed by its cost.


Well, you already have to remember that 'if' introduces a scope but '#if' doesn't.
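Concretely: the preprocessor runs before the language proper, so a name introduced under #if lands in the enclosing scope, no matter how the surrounding code is braced. A small sketch:

```cpp
#include <cstdint>
#include <type_traits>

#define USE_WIDE_INDEX 1

#if USE_WIDE_INDEX
using index_t = std::int64_t;  // no scope here: the alias escapes the #if
#else
using index_t = std::int32_t;
#endif

static_assert(std::is_same<index_t, std::int64_t>::value,
              "index_t is still visible after #endif");
```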


I guess that's true. But I can't remember the last time I have used `#if` aside from header guards and optimization paths. So they do exist, but I almost never do this.


I think I just got used to C++11. What is this dark magic?


If you're referring to `if constexpr`: there is no dark magic. Most of it is already possible with C++11 metaprogramming (i.e. with std::enable_if [0]). However, it's much more readable now (and it's more fun to write :D). I think we're well on the way to compile-time programming becoming more easily accessible.

[0] https://en.cppreference.com/w/cpp/types/enable_if
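A small side-by-side sketch of that point: the C++11 enable_if overload pair next to the C++17 if constexpr version of the same (made-up) function:

```cpp
#include <string>
#include <type_traits>

// C++11: dispatch via SFINAE, split across two overloads.
template <class T>
typename std::enable_if<std::is_arithmetic<T>::value, T>::type
twice11(T x) { return x * 2; }

template <class T>
typename std::enable_if<!std::is_arithmetic<T>::value, T>::type
twice11(T x) { return x + x; }

// C++17: one function, one readable branch; the untaken branch
// is never instantiated for the given T.
template <class T>
T twice17(T x) {
    if constexpr (std::is_arithmetic_v<T>)
        return x * 2;
    else
        return x + x;
}
```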


Most of this stuff was possible in C++03. Newer standards only make it more convenient and (arguably) saner.


Oh my. If only C++ would have used proper macros from the start...


I discussed this once with Stroustrup. He mourned that the word "macro" had been "polluted" (his word, though I completely agreed) by the preprocessor and said "templates was the best we could do within the constraints of C".

This was in the mid 90s and things are a lot better these days but still, I had better macro support writing Lisp in the 1970s.


I am ready to believe that hygienic macros would have been difficult, but just using Lisp macros should have been straightforward. Tbh, I think that templates were an ad hoc, amateurish design decision.


A lot of the arcanity in templates is the arcane C syntax they are shackled to. I wouldn't call it ad hoc.

Lisp is so much easier. But current C++ is surprisingly expressive and generates pretty good production code. Instrumenting your Lisp with a lot of type declarations is pretty messy too.


There is no such thing as "a proper macro". There is proper metaprogramming support, or a badly designed language.


A macro is a meta-level abstraction. So proper macros require proper metaprogramming support. And, I would argue, the other way around.


And I would argue that a macro is an external language put on top of your language as a patch (a hack), when your language is not expressive enough for genericity.

Then yes, there is no such thing as a "proper" macro. There is just an attempt to fix a limited language in the first place.

And it is something that C++ has managed to solve (not that badly) by now.



