C++20 Oxymoron: Constexpr Virtual (cppstories.com)
54 points by todsacerdoti 9 days ago | 50 comments





Not really an oxymoron: virtual calls could always be replaced by direct calls if the dynamic type of the object can be determined, and if the compiler can determine both that type and the value returned by the virtual function call, whether by inlining or link-time optimization, the call can be replaced by the value. That was already true in the 1998 version of the standard. So constexpr just adds a mechanism to help the compiler do this early evaluation, marking where it should be done, plus the ability to use such calls in contexts where the grammar needs a constant, by promising that the call can be reduced to one.

It's only an oxymoron according to folk wisdom. The common shorthand definitions of constexpr and virtual are both too simple. People come to believe that constexpr means evaluated by the compiler, but it really means that it may be evaluated by the compiler. Similarly, programmers believe that virtual means vtable dispatch, but all it really means is the function supports dynamic dispatch. It may be de-virtualized at compile time, or there may be no overrides anywhere in the program.

If you understand the true definitions, `constexpr virtual` doesn't seem contradictory.
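For anyone who hasn't seen it in practice, here's a minimal sketch (names are illustrative, not from the article): the dynamic type is known during constant evaluation, so the "virtual" call resolves entirely at compile time.

    struct Shape {
        constexpr virtual int sides() const { return 0; }   // C++20 allows constexpr + virtual
    };
    struct Triangle : Shape {
        constexpr int sides() const override { return 3; }
    };
    constexpr int count() {
        Triangle t;                  // dynamic type is known to the constant evaluator
        const Shape& s = t;
        return s.sides();            // virtual call, resolved during constant evaluation
    }
    static_assert(count() == 3);     // the whole call chain folds to a constant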


I dunno why people downvote this comment, it seems legit and it's important to know that `inline`, `virtual`, and `constexpr` are actually just heuristics / suggestions to compilers.

> it seems legit and it's important to know that `inline`, `virtual`, and `constexpr` are actually just heuristics / suggestions to compilers.

But they are not?

- inline relaxes the ODR, which generally changes the linkage of the symbol (weak linkage when inline and exported, strong when not)

- adding virtual will always give your class a vtable and make your method overridable.

- constexpr makes it possible to use the result in, e.g., template arguments:

    #include <array>

    constexpr int foo() { return 123; }
    std::array<int, foo()> arr;
That does not mean the compiler cannot optimize: no matter how complex your program is, if the compiler has sufficiently advanced optimization passes, it should be able to precompute everything that can be precomputed at compile time.

Technically, virtual guarantees dynamic dispatch (i.e. calls through the base type will resolve to the final overrider). The dispatch can still be resolved at compile time, of course.

A compiler could legally remove the vptr if it is not necessary. In practice this only happens for purely local objects, as current compilers do not implement the necessary interprocedural data layout transformation.
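A rough sketch of the local-object case (illustrative code; how far the optimizer goes is implementation behaviour, not something the standard promises):

    struct Base    { virtual int f() const { return 1; } };
    struct Derived : Base { int f() const override { return 2; } };

    int use() {
        Derived d;           // dynamic type is fully known here
        const Base& b = d;
        return b.f();        // typically devirtualized and folded to 'return 2';
                             // whether the now-unused vptr store is also elided
                             // is up to the optimizer
    }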


> adding virtual will always make your class have a vtable

No?

https://godbolt.org/z/qMzY1h6Gs


... but the object has a vtable in the example? A call being optimized because the compiler can infer that a specific call site does not need the vtable does not mean that the vtable isn't there, taking up space in the object.

proof: https://godbolt.org/z/W9MMGvTEz

proof at runtime (to make sure that the compiler doesn't "lie" about the size, we check the stack address of the variable constructed after the one with a virtual function, and after the one without; in the first case it's 16 bytes later, in the second 1 byte): https://godbolt.org/z/s11o9do4n. Basically, by the simplest indirect measurement we can make of the object's size, the vtable is there.
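The size difference can also be seen statically with sizeof; a sketch, assuming a typical implementation where a polymorphic object carries a single vptr (the exact numbers are implementation-specific):

    struct Plain       { void f(); };
    struct Polymorphic { virtual void f(); };

    static_assert(sizeof(Plain) == 1);                    // empty class: 1 byte on common ABIs
    static_assert(sizeof(Polymorphic) == sizeof(void*));  // room for the vptr, e.g. 8 bytes on x86-64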


It doesn't have a vtable; it has enough space for the vptr, but the vptr isn't even initialized. It's true that the thing is larger than your trivial class.

If your class never needs a runtime virtual call because all virtual calls were monomorphisable, why would it still include a virtual table? What’s the point?

> If your class never needs a runtime virtual call because all virtual calls were monomorphisable, why would it still include a virtual table? What’s the point?

If that were not the case, it would mean that the size of the type would vary, which would break a lot of expectations.

Imagine the following:

    struct obj { virtual int space_used() { return sizeof(obj); } };
    struct sub1 : obj { int space_used() override { return sizeof(sub1); } };
It would be very normal for someone to call `space_used()` at some point and use that size at another point (with a different instance), for instance for allocation; if that size were to change because one instance was "optimized" by the compiler, that could wreak havoc.

Ah I assumed it knew about all classes before it started running compile-time code.

It's also worth noting that a vtable implementation of `virtual` methods is not required by the standard. Granted, I'm not aware of any other implementations of virtual methods (in the case where the compiler cannot optimize the virtual dispatch) in production compilers.

GCC can use whole program optimization to replace virtual dispatch with a series of if statements. There are plenty of situations where it's faster to test for a handful of types and then inline the body than it is to perform two indirections.
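Conceptually the transformation (often called speculative devirtualization) turns the indirect call into something like the hand-written version below; this is only a sketch of the idea, not actual GCC output:

    #include <typeinfo>

    struct Shape  { virtual int area() const = 0; virtual ~Shape() = default; };
    struct Circle : Shape { int area() const override { return 10; } };
    struct Square : Shape { int area() const override { return 20; } };

    // What the optimizer may effectively turn 's.area()' into when class-hierarchy
    // analysis or profiling says Circle and Square are the likely targets.
    // In real output the test is on the vptr rather than typeid; this is a sketch.
    int area(const Shape& s) {
        if (typeid(s) == typeid(Circle))
            return static_cast<const Circle&>(s).Circle::area();   // qualified: direct call, inlinable
        if (typeid(s) == typeid(Square))
            return static_cast<const Square&>(s).Square::area();
        return s.area();                                           // fallback: normal virtual dispatch
    }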

> GCC can use whole program optimization to replace virtual dispatch with a series of if statements.

Would you happen to have a sharable example of this occurring? I'd be curious to see what heuristics GCC is using. Also, what is the conditional on... is it still on the vtable, just avoiding the indirect call?


Jan Hubička worked on much of the devirtualization in GCC and posted a blog about how it works here:

http://hubicka.blogspot.com/2014/01/devirtualization-in-c-pa...


wow, I never knew that, crazy

This example is trivial, but compilers are generally very good at de-virtualizing at compile time even in large programs. People who bring up the supposed cost of vtable dispatch are usually either repeating bad folk wisdom or haven't used the language in 20 years.

Seems to me like the real problem is the burden of knowledge required to intuit how C++ compilers will or won't optimize the code you write. I know this has been an ongoing refrain for years now, but this sort of thing has driven me away from using C++ personally. I wish there were a better syntactic system for denoting where you do and don't want certain optimizations to be applied.

Do you also wonder where the JVM or .NET runtime will inline method calls?

No, because those are explicitly "managed" systems.

Inline is probably the keyword whose true definition is furthest from popular belief. By the way, constexpr implies inline for function declarations.
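A small illustration of that last point (a sketch, not from the thread): because constexpr implies inline, a definition like this can live in a header included by many translation units without violating the ODR.

    // some_header.h (hypothetical)
    constexpr int answer() { return 42; }   // implicitly inline, so one definition per TU is fine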

I think a lot of the confusion is due to the lack of a clear evaluation model for runtime vs compile-time code. Racket actually has a clear model for which code is available at compile time vs runtime, using phases. While what can be done at compile time is really broad, message passing between phases is only done through code generation (macros). There are no surprises about what happens at compile time vs runtime. Limiting compile-time code to macros also enables really powerful tooling for debugging macros, since you can step through how each macro gets expanded (and it's common for macro code to return more macros).

"constexpr Box getBox() const noexcept override" - this signature is just ridiculous. Can't compiler at least infer "const noexcept" from "constexpr"?

Meanwhile, how is the C++ lifetime profile doing? Is it usable yet?

I know C++ pretty well, but I'd pick Rust (or Zig) for my next project.


The const and constexpr keywords mean different things.

Declaring a method const tells the compiler that that method won’t change the object.

Declaring something constexpr tells the compiler that if its arguments are known at compile time then it will always return the same result and may be a candidate for “invocation” at compile time.

The fact that the keywords both contain the string “const” is a consequence of the English language.
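A small sketch of the distinction (illustrative names):

    struct Counter {
        int n = 0;
        int  get() const { return n; }   // const: promises not to modify *this
        void bump()      { ++n; }        // non-const: may modify *this
    };

    constexpr int square(int x) { return x * x; }  // constexpr: usable where a constant is required

    static_assert(square(4) == 16);                // evaluated at compile time
    int at_runtime(int y) { return square(y); }    // the same function still works at run time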


Until C++14 constexpr on a non-static member function did actually imply const, since mutation was (logically enough) not permitted in constexpr functions. C++14 relaxed this restriction, however, by allowing constexpr functions to mutate objects which only exist within the lifetime of the constexpr expression, so constexpr no longer implies const.
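For example, something like this has been valid since C++14 (a minimal sketch): the mutated object exists only within the constant evaluation, and the member function is constexpr but not const.

    struct Accum {
        int total = 0;
        constexpr void add(int x) { total += x; }  // constexpr, non-const: OK since C++14
    };

    constexpr int sum_to(int n) {
        Accum a;                       // lives only inside this constant evaluation
        for (int i = 1; i <= n; ++i)
            a.add(i);
        return a.total;
    }
    static_assert(sum_to(4) == 10);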

Lifetime static analysis is still pretty much experimental in clang and VC++.

As for inference, maybe in the example, although in most code bases with binary libraries it would be useless anyway.


The signature reminds me of dlang for some reason, with its "pure nothrow @safe @nogc" etc.

I dream of the day when we can eliminate the need to declare functions as constexpr. It looks so ugly to have that sprinkled everywhere.

It seems that C++ is moving at full speed along the trail blazed by the D language, more than Bjarne will care to admit:

https://brevzin.github.io/c++/2019/01/15/if-constexpr-isnt-b...


I wish that were true. Unfortunately, C++ is copying many of the superficial aspects of D and making many mistakes along the way, mostly because of the bitter pissing contest that exists between the two communities, which can largely be traced back to this paper:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n361...

Admittedly there are a ton of things I don't like about D, and overall I find the language to be quite unprincipled, just a kitchen sink of poor, mismatched features... but certainly there are some things D does exceptionally well, and C++ should have learned from them to avoid about a decade's worth of lost productivity.

Things like modules, ranges, template constraints, static if, and compile-time execution are all much cleaner in D than what C++ has, and with every new C++ standard it turns out that things are broken: defect reports have to be published, there are serious quality-of-implementation issues, and the features just don't work as well in actual production-grade projects as they do in the tiny, artificial examples one reads in a blog post showing off the feature.

But it's taboo to learn from D, or virtually any other language, with the mantra that "C++ is not like other languages, it's just soooo unique and special that what works for language Foo won't possibly work for C++," and so instead C++ has to reinvent everything in a half-assed manner and repeat many mistakes that others have already learned from.


I definitely agree that there's a general mantra that C++ is special, and a bullheadedness about doing things differently.

But I do also know from experience that walking the line of evolving language features in a compiler taking into consideration prior legacy decisions is extremely difficult.

I think, more than it being a pissing contest, there are practical considerations the C++ Standards Committee is weighing in terms of performance, ease of implementation, and interaction with legacy features.


Hmm... Weird:

> The feature provides a single syntax with three distinct semantics, depending on the context of use.

Following immediately after

> ...fundamentally flawed, and its adoption would be a disaster for the language.

as it does, one could almost think he means that's a bad thing. And then:

> This will make programs harder to read, understand, maintain, and debug.

Yeah, so what -- isn't that what C++ is all about?


Whatever Bjarne thinks, it is only one vote per paper.

In fact, he has failed to avoid certain C++ features even though he heavily argued against them.


Nothing new. C++ has been stealing ideas left, right and center. And always without attribution.

Depends on whether those ideas were already present in "The Design and Evolution of C++" or not, and on how much the haters actually care to read the C++ committee mailings.


It's been neat to see the trend of languages moving toward allowing more compile-time code execution like this.

Which GNU C++ version will compile the sources in the article? I couldn't get the sections thing to compile with 10.3.1 even with -std=c++20 or -std=c++2a

If C++ charged by the keyword it'd be the most expensive language anywhere. Sort of like 90-syllable orders at Starbucks: that'll be 43.2 million, please!

Do you have any better use cases for these constexpr techniques? Maybe we could write simpler code than with CRTP?

I never knew this was possible.

I've been in the C/C++ world for more than a decade.

The seeming simplicity of the C++ spec in comparison to "modern" languages hides many unfathomable corner cases and non-obvious interactions of language features.


> seeming simplicity of C++ spec in comparison to "modern" languages

This is the first time I have seen someone claim that the C++ spec seems simple. As far as I know, it is the most complex language ever designed (joke/esoteric languages notwithstanding).


The basic spec, Stroustrup's C++, is simple.

The insanity really began when attempts to further formalise and narrowly specify the language revealed glaring gaps in the basic spec, and a decade-plus period of haphazard fixes to the standard started.


But even the basic, original spec was extremely complex compared to any other language I, for one, know. For example, there is no other language that has as many type qualifiers as even the original C++: volatile, static, const, extern, inline (for function types), mutable (for members), pointer (*), fixed-size array ([NUM]), reference (&), throw (for function types), virtual, friend [we could add unsigned and long, but those at least only apply to a few built-in types].

C++98 had the same value categories (lvalue and rvalue) as C, but already more complex interactions between these and various other language features, because of references if nothing else.

Then, C++ has not one but two kinds of pointers: C-style pointers and references, each of which interacts with other language features differently.

C++ has the most complex function overloading rules of any language, made all the more complex by the special interaction between single-parameter constructors and function overloading. Operator overloading adds yet another layer of complexity, being slightly different from the function overloading rules.

C++ has exceptions and destructors and special interactions between them. Exceptions also interact with the reference and type-specifier system in complex ways. C++98 even had exception specifications, which also interact with the type system in general.

C++ has templates, with myriad complex rules for their syntax (e.g. the little-known second meaning of the keyword "template"), for name resolution inside template definitions, and for template interactions with the linker. SFINAE is also a famously hard-to-understand rule, one that interacts with overload resolution so nicely that it turns the compilation process into a Turing-complete language of its own.
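(That "second meaning" is the disambiguator for dependent member templates; a minimal illustrative sketch, not quoting the spec:)

    struct Box {
        template <typename U> U get() const { return U{}; }
    };

    template <typename T>
    int first_int(const T& t) {
        // 'template' here tells the parser that get names a member template;
        // without it, the '<' after get would be parsed as less-than.
        return t.template get<int>();
    }

    int use() { Box b; return first_int(b); }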

C++ has namespaces, which interact with name resolution and thus templates and overload resolution in several ways. In the same vein, it has private, public, protected, and friend, all interacting with name resolution.

C++ supports multiple inheritance, with complex rules for how objects are constructed in the presence of multiple super classes.

C++ has 3 different ways of initializing member variables, with defined interactions between them (in-place, constructor initializer list, constructor body).

C++ inherits C's pre-processor, while adding new ways for it to interact with linkage rules (because of templates, default constructors/destructors).


C# could probably rival C++ in sheer keyword count thanks to LINQ's SQL-like syntax (which most people don't use anyway, since you can just use regular methods instead), but in contrast to Stroustrup's monstrosity, it's an incredibly well-designed language. Some of the core language features are built on top of foundations thoughtfully introduced many language versions earlier, all the way back to C# 1.0.

Still, the cruft accumulates, and I dread the day when the language will become recognizably C++ish. Feels like things are getting out of control already - most C# programmers I've met know/use barely any of the features introduced after C# 6.0. (They're pretty good, though. The features, I mean. But the programmers also)


>C++ has 3 different ways of initializing member variables, with defined interactions between them (in-place, constructor initializer list, constructor body).

If you generalize that to variables, you get 18: https://accu.org/journals/overload/25/139/brand_2379


Just wanted to let you know: you may be suffering from Stockholm syndrome.

Shouldn't that read as: "The complexity of the C++ spec in comparison to "modern" languages hides many unfathomable corner cases and non-obvious interactions of language features"?


Of course the corner cases exist in all languages; what is different is that most have only one, or a few, implementations, so the definitions of those cases are de facto "whatever the compiler/interpreter does".

A lot of the C++ spec is nailing these down (or giving you a tool to choose). In that it’s a lot like IEEE 754: very simple, but then with page after page of “oh, so here’s a non-obvious consequence and how it must be handled.”


I think a lot of the complexity of C++ comes from the cases it doesn't nail down and leaves as implementation-defined or, worse, undefined behaviour. There are several JavaScript implementations too, but one rarely needs to care about this.


