C++17’s useful features for embedded systems (memfault.com)
218 points by manchoz 11 months ago | 191 comments



I added 0b binary literals to C++ back in the 1980s.

https://www.digitalmars.com/ctg/ctgLanguageImplementation.ht...


I really appreciate the single quote allowing for alignment:

    case 0b0001'0000:

    case 0b0101'1000:

(edit: fixed formatting, but I also wanted to agree with a lower-down comment asking for 0b0001_0000 instead; a better choice.)


As you say, in D it's:

    case 0b0001_0000:

    case 0b0101_1000:
which indeed looks much nicer. I have no idea why C++, when copying the feature, didn't use _.

P.S. I copied this feature from 1983 Ada. AFAIK, it was a completely forgotten feature until D had it, then other languages started copying D.


I had some fun recently using (abusing?) `constexpr` to process string literals at compile time to ‘compress’ them in the binary and save a few bytes in my microcontroller.

https://gist.github.com/th-in-gs/7f2104440aa02dd36264ed6bc38...

I’m just shaving some bits off - but I guess in principle you could do anything that’s `constexpr` evaluatable. Gzip compression of static buffers?…

Godbolt example:

https://godbolt.org/z/qc7jhKoGc
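
For a flavour of the approach, here's a minimal sketch (not the gist's actual code; `to7bit` is a made-up name) of transforming a string literal at compile time so that only the transformed bytes end up in the binary:

    #include <array>
    #include <cstddef>

    template <std::size_t N>
    constexpr std::array<char, N> to7bit(const char (&s)[N]) {
        std::array<char, N> out{};
        for (std::size_t i = 0; i < N; ++i)
            out[i] = static_cast<char>(s[i] & 0x7F); // any constexpr-evaluable transform works here
        return out;
    }

    constexpr auto msg = to7bit("hello, world"); // forced compile-time evaluation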


This is the intended purpose of the feature. gzip may be a little aggressive given that you have to unzip it again at the other end, but it's very possible (and potentially not even as expensive as one might expect, given that these types of compression algorithms can be tuned on a speed/size tradeoff and back off when struggling to compress).

https://github.com/PhilippeSigaud/Pegged is a D library that generates a parser for you from a grammar (string) at compile time.


I think a lot hangs on what the data is ultimately for— if you have twenty compressed blobs and you only need one of them at a time, then it's perfect, or maybe if you have a single large blob that you just need targeted random access to.

But if you're going to uncompress it all right at startup, then it's not worth it at all— microcontroller flash is much cheaper and more plentiful than RAM.


The trick is to not even have strings in the binary. Take a look at trice, defmt and logging in ESP-IDF.


It's a little insane they don't have run length encoding for statically initialized data.


Why

    uint8_t b = 0b1111'1111;

I would rather have

    uint8_t b = 0b1111_1111;

This ' thing is hard to get right on some non-us keyboards. And yes, I've the same problem with Rust.


It could be misinterpreted as a user-defined literal [1]; for example, with 0xAAAA_BBBB it is unclear whether _BBBB is a user-defined literal suffix. The original proposal discusses some of the alternatives [2].

[1] https://en.cppreference.com/w/cpp/language/user_literal

[2] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n34...
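
To make the ambiguity concrete, here's a sketch with a hypothetical suffix: `_BBBB` is already a legal user-defined literal name, so `0xAAAA_BBBB` would clash if `_` were also the digit separator.

    constexpr unsigned long long operator""_BBBB(unsigned long long v) {
        return v << 16; // hypothetical suffix, purely for illustration
    }

    static_assert(0xAAAA_BBBB == 0xAAAA'0000); // parses as operator""_BBBB(0xAAAA)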


Didn't know about this language feature. Seems neat. The ASM integration is also neat. If you compare that to C...

Still, I have rarely seen C++ compilers for embedded systems, although the definition of "embedded" more and more includes PC hardware.


> Still, I have rarely seen C++ compilers for embedded systems.

What kind of embedded systems? Even AVR has a C++ compiler.


Yeah, that was wrong. I meant there are very few serious projects implemented in C++ for µCs that are memory-constrained.

Yes, AVR has a C++ compiler, but I could imagine it doesn't support all language features. Even if it does, dynamic memory allocation would still be discouraged, and that is pretty essential to leveraging many advantages of object-oriented languages.

Of course embedded systems aren't restricted to that anymore, and you are perfectly fine using C++ for the average ARM system that isn't as memory-constrained.


Still, C++ is very useful on embedded systems. This article is just about that.


These are all valid Rust:

  0b1111_1111
  0b11111111u8
  0b1111_1111_u8
  0b11_11_11_11_u8
https://doc.rust-lang.org/book/ch03-02-data-types.html


Their problem with rust is probably more all the single quotes you need for lifetimes


I think you can't do that, because the underscore may start a user defined literal suffix.


D doesn't have a special syntax for user-defined literals, which avoids this problem completely. One can use templates for user-defined literals, such as:

    km!1000
for 1000 kilometers. Here, km is a template that takes an integer argument:

    struct Kilometer { int v; }

    template km(int V) { Kilometer km = Kilometer(V); }


Are you sure? ' is easily accessible on every layout I've looked at. ´ and ` are usually a pain though. There's probably some layout where it's a pain, but it can't be that many.

But oh how I hate `. I had to edit my layout to make it typable for stupid haskell infix functions


> I would rather have uint8_t b = 0b1111_1111;

In D, you would have:

    ubyte b = 0b1111_1111;


Inline class static variables... finally.

if constexpr is neat (along with a bunch of the other constexpr/compile-time features that have been coming along), but I feel like this will have both... good and bad uses, and I fear for the astronauts who will go crazy with this.

The enhanced conditionals, this I kind of like though it would take some while to get used to... kind of surprised this got in, being such a departure from C.

Small thing: hardware_destructive_interference_size is nice. Wish I had this in Rust.

Looks like it was asked for (https://github.com/rust-lang/rfcs/issues/1756) but went nowhere.


> kind of surprised this got in, being such a departure from C.

While completely gratuitous incompatibilities with C are not welcome, C compatibility in general was abandoned in new features long ago (consider range based `for` or the venerable `nullptr`).


Luckily (?) nullptr is in C23 so yay!


So kind of the C committee to help C++ reduce its incompatibility with standard C :-)


I have a strong dislike for the enhanced conditionals. Like, the feature itself is fine... but there's just a certain subset of people who think any new C++ feature is the blessed "proper" way to accomplish any task and this feature is very abusable. It also doesn't provide very much benefit to actual practice in return.


> I have a strong dislike for the enhanced conditionals.

So don't use them. C++ is a multi-paradigmatic language, you don't have to (and typically can't, really) put all features to use.

I'm not such a great fan of them either, but it's not like they would confuse me if I saw them in somebody else's code.

https://www.youtube.com/watch?v=cWSh4ZxAr7E&list=TLPQMjkwNTI...


For some additional context, this came up for me because some other team had snuck verbiage into our style guide that mandated things like this instead of traditional initializers:

    void foo() {
      if (auto [a, b, c] = std::make_tuple(x, y, z); true) { /* [...] */ } 
    }
Yeah you can still read this, but it's stupid to support constructs creating scope, doing control flow, and initializing arbitrarily many variables simultaneously (which may invoke constructors of their own). The relatively minor benefits to things like iterators are not outweighed by the burden of supporting this stupid code.


If the point is to limit the scope of specific variables, wouldn't it be simpler to write

    void foo() {
        {
            auto [a, b, c] = std::make_tuple(x, y, z);
            /* [...] */
        } 
    }


I think the original comment's point is exactly that. But people see the new expression type and want to apply it in this case.

I'm of two minds on this. I can see the impulse and why you'd reach for it: Block lexical scopes with no if/while/etc statement attached read a bit odd. Introducing an "unattached" block means as a reader/reviewer I want to know why the scope has been created. So in this new syntax I suppose makes it "clearer" (in some respects) that what is being done here is introducing a new lexical scope specifically for the given variables. Like I commented elsewhere, this is somewhat similar to the ML-languages "let <assignments> in <block>" syntax, which I have always found admirable, as it makes clear to the reader (and compiler) what scope and state are being dealt with.

On the other hand, this is so out of step with C/C++ style generally, and it seems so excessively "clever" that I think it's going to piss people off. And because it's bolted onto the conditional expr, you get the pointless ;true there.

Having a with ( ... ) syntax would have been nicer?

  with (auto [a, b, c] = std::make_tuple(x, y, z)) {
  }
I'm curious what Titus Winters and the Google C++ style guide is saying about this.


I'm actually not curious about what Google's guides say, and don't accept them as technical leaders.

Having said that: It's not really unlike C++, since you have it in for loops:

    for (int x = 0; x < n; x++) { do_stuff(); }
which is like

    {
        int x;
        for (x = 0; x < n; x++) { do_stuff(); }
    }

anyway, I wouldn't mind the syntactic sugar of "with X=Y do Z" or "let X=Y in Z"


Having the compiler deduce the return type of a function depending on the constexpr path taken is really useful.
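
A minimal sketch (names assumed for illustration): each instantiation keeps only one branch, so the deduced return type can differ per T.

    #include <type_traits>

    template <typename T>
    auto length(const T& value) {
        if constexpr (std::is_arithmetic_v<T>)
            return value;        // deduces T for numbers
        else
            return value.size(); // deduces the container's size_type otherwise
    }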


Different feature, I'm talking about the optional initialization section added to conditionals. It allows moving declarations to the point of use like for statements. I've also met people who think it's now the appropriate place to declare all block scope variables, which is horrifying.


It's horrifying but maybe also intriguing?

Has shades of the OCaml/StandardML "let ... " scope:

StandardML:

  let
    val a = 1
    val b = 2
  in
    a + b
  end
It's nice for making it clear that the scope of the assignment is restricted to this lexical scope, though there are other ways to do this in C/C++.

I work in Rust these days so haven't had a chance to use much of this, but I wonder if this has some nice RAII type use patterns? I may now go explore.


Isn’t polymorphic memory allocator the most significant recent C++ addition for embedded systems that allows preventing runtime memory allocation after the init phase and thus allows conformity to MISRA and other guidelines for critical SW development (esp. when using stdlib)?


You could always use the STL with your favorite allocator. PMR just makes it simpler, and provides library implementations of common allocator patterns. PMR is not a zero-cost abstraction, though: its implementation via type erasure has performance costs.
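
A minimal sketch of the embedded-friendly pattern (the buffer size is an arbitrary assumption): a fixed buffer backs all allocations, and null_memory_resource() makes any overflow fail loudly instead of silently hitting the heap.

    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    static std::byte pool[1024];
    static std::pmr::monotonic_buffer_resource arena{
        pool, sizeof pool, std::pmr::null_memory_resource()};

    std::pmr::vector<int> values{&arena}; // elements live in `pool`, never on the heap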


> Isn’t polymorphic memory allocator the most significant recent C++ addition for embedded systems

I would say no. Run-time polymorphism is overrated IMHO, and more so for embedded systems, again IMHO. C++ in general has been moving towards preferring things happening statically rather than dynamically.

> that allows preventing runtime memory allocation after the init phase

That's not what allows preventing runtime memory allocation after an "init phase". Unless I'm misunderstanding what you mean.

... oh, I think I get it: As a general rule, avoid using standard library data structures with allocators. They may work fine as boilerplate, but are usually not what you want when you have any non-trivial requirements. std::vectors are fine if you don't mind the allocations - but you do, so not even those. You could use custom allocators, but that whole mechanism is totally broken design in the opinion of many; see Andrei Alexandrescu's talk about this subject: https://www.youtube.com/watch?v=LIb3L4vKZ7U


I think polymorphism has its place, and Lakos’s use of them for allocators is one such place: it lets you bind a well-defined interface to a concrete implementation at runtime. So rather than wrestle with templated allocators where every `std::vector` is a different static type, you can use `std::pmr::vector` and at a small runtime cost have huge runtime flexibility that (according to Lakos) can easily pay for itself.


Granted, std::pmr::vector has its place...

but - PMR allocators also have that weird fixation on types, which they shouldn't; plus, dynamic allocation is expensive. It's particularly expensive to let your containers reallocate willy-nilly on their own. I'd go with `std::array`; or, if I wanted something dynamic, a dynarray (which the standard doesn't even have).


Not really sure they are that useful for embedded systems. I think the most useful thing is in C++20, and that is coroutines. For bare-metal embedded systems this simplifies a lot of things, but it isn't super common on embedded toolchains yet.


Async/await in no_std Rust was a godsend for embedded firmware and driver development, turning really simple and linear code into a tiny state machine with zero alloc and no runtime cost. I can imagine how coroutines could provide something similar for C++.

I migrated a rust LoRa driver from traditional blocking to async/await in order to test two modules in simultaneous send/receive mode as part of the test suite on a single STM32 chip. Aside from pleasing the HAL abstraction by bubbling all the generics throughout, it was an entirely pleasurable experience and made much better use of available resources than the traditional approaches without having to manually manage state across yield points or use a dynamic task scheduler. No OS, no runtime, no manual save/restore of stack, no global variables. It is really the future of the truly micro barebones embedded development.


I thought async/await creates a state machine in the background. If so, there should be a runtime cost.

State machines to deal with interrupts (to name the most basic kind of async event) is the ABC of embedded "bare metal" programming.

Whether async/await is cleaner, easier, readable, etc. is a matter of taste.


No, it compiles to a state machine (there is no “background”) and you just loop over the fixed list of futures/tasks in your main calling next/poll() to step through the states - there is no runtime so you have to do that yourself.


Yes, "compiles to state machine" is what I meant.

> you have to do that yourself

Yes. And it has a cost. Plus the cost of executing that state machine (changing states, etc.), which is hidden from you.

One can always pull a more optimized and fine-tuned state machine made by hand (sitting on WFI for example), and here comes the "taste" factor.


I’ve heard coroutines in C++20 were dead on arrival (especially on embedded) because it may liberally do heap allocations to store locals and there’s no way to control this. (Typical library implementations I’ve seen for lightweight coroutines/fibers seem to set the stack size for each coroutine as fixed…)


You can provide a custom implementation for "operator new" for the coroutine, so you could instead use some sort of preallocated buffer to store the coroutine frame (or some other custom memory management scheme), but yeah the design assumes that sometimes there will be a need to stash the state of the coroutine somewhere.
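
A rough C++20 sketch, not a complete task type; the static pool and the single-frame policy are assumptions for illustration:

    #include <coroutine>
    #include <cstddef>
    #include <exception>

    struct Task {
        struct promise_type {
            // Coroutine frames are carved out of this pool instead of the heap.
            alignas(std::max_align_t) static inline std::byte pool[256];
            static inline bool in_use = false;

            void* operator new(std::size_t n) {
                if (in_use || n > sizeof pool) std::terminate(); // your policy here
                in_use = true;
                return pool;
            }
            void operator delete(void*) noexcept { in_use = false; }

            Task get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() { std::terminate(); }
        };
    };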


Oh what a shame! I just replied to the parent comment (before seeing your reply) about how incredible async/await “coroutines” were with embedded rust requiring actually zero runtime cost (no alloc, no runtime, no dynamic scheduler).


Not a C++ developer but nodiscard is gross in my opinion. I've seen codebases littered with it for no good reason. Why should you care if the caller uses the return value or not?


Honestly it's kind of useful as a customized warning to callers. Callers can still ignore it by, for example, writing (void)funcall_with_nodiscard(args). But they have to explicitly declare the intent to ignore the result. That seems like an all-around fair construct.


> Why should you care if the caller uses the return value or not?

Nodiscard is a way of “enforcing” a contract with the user about how your code needs to be used in order to avoid undefined behaviour.

(I say “enforcing” in quotes because, instead of being an actual constraint, it’s merely an attribute - so a conforming compiler can happily ignore it)

Here are two examples of where it is useful:

- Returning a success code that must be checked before proceeding with an operation that could have bad consequences if the previous operation failed. Unchecked operations are one of the major sources of bugs in the wild, so no discard at least points the user to the potential problem.

- Returning something that doesn’t make any sense to immediately discard. This is usually down to a mistake - such as calling vector::empty thinking that it’s going to clear the vector, when it actually returns a bool telling you if it’s empty or not (an awful name, but then so is vector…). It makes no sense to check if it’s empty without using that result, so the warning indicates that the user has made a mistake.
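
A minimal sketch of the first case (write_config is a hypothetical function):

    [[nodiscard]] bool write_config() { /* ... */ return true; }

    void update() {
        write_config();                          // compiler warns: result ignored
        (void)write_config();                    // explicit, intentional discard
        if (!write_config()) { /* handle it */ } // the intended use
    }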


Because a function can be assumed to work one or two ways and nodiscard can prevent you from assuming the wrong thing. Imagine a fake string::concat(other) that does what it says on the tin. You can assume it appends to s but what if it allocates a new str instead? You label it with nodiscard because there is literally zero purpose in ever calling this method but ignoring the result - if you ignore it, it’s as if it was never called (since the existing variables were not modified) except you paid the price with the allocations. So either use the result or drop the call.
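
A sketch of that fake concat:

    #include <string>

    [[nodiscard]] std::string concat(const std::string& a, const std::string& b) {
        return a + b; // returns a new string; the arguments are untouched
    }

    void f(std::string& s) {
        concat(s, "!");     // warns: as if never called, but you paid for the allocation
        s = concat(s, "!"); // the intended use
    }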

Using it to enforce that every return value is handled is stupid as it can lead to error blindness and you’ll ignore it when you actually need it.


So that they don't misuse it. A good example is vec.empty(). There's no point calling vec.empty() without checking its return value. It helps distinguish noun vs verb.


This is really a mis-design; the proper name for the function should ask a question: `std::vector::is_empty()`.


When the return value can indicate that an error occurred, the caller can only know the function actually succeeded by checking that value.


[[nodiscard]] is there because people can't keep clear and empty straight. Which one empties the container and which one checks if it's empty?


Some functions are pointless without their return value, not using the return value is most likely a bug, and it is good to have a warning. If using the function without using the return value is not a bug, then there is a problem with your codebase.


std::clamp; I got it exactly wrong and would have introduced a bug if the compiler didn't tell me I was discarding the return value. I thought it updated the value I wanted clamped. It does not. Don't know why I thought that, but I did.

The function is entirely useless and pointless to call without using the returned value, and [[nodiscard]] pointed this out to me. Perfect.
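
In code, the mistake looks like this:

    #include <algorithm>

    void fix_brightness(int& v) {
        std::clamp(v, 0, 255);     // no effect on v: exactly the bug [[nodiscard]] flags
        v = std::clamp(v, 0, 255); // what was actually intended
    }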


Forces your coworkers to check return codes


Everyone is mentioning error codes but I think a better use is when the function returns some allocated memory, where ignoring the return value would be a bug. I’m not convinced nodiscard is very useful, however.


If you're returning raw pointers in c++20 something has gone off the rails already. If the memory is wrapped in some kind of RAII structure, then why is it any more of a bug to ignore it than any other return value?


Despite what blogposts and HN seem to suggest, that is not how a lot of C++ code is written, for better or for worse.


I guess. My current project is spacecraft flight software targeting C++98 on GCC 4.3ish. Still we backported a slightly gimped `shared_ptr` and a much more gimped `unique_ptr` for the incredibly limited amount of dynamic allocation we do on startup.

I know there are a lot of people out there still writing C and I've come to accept that I will never understand or agree with those people, but to be writing C++ and passing raw pointers around in 2023 is a hell of a choice IMO.


There are many good uses, and I wish `new` always required it.


Hey rust - check it out. Compile time evaluation without macros, isn’t that neat?



Not even close


TIL compile time evaluation without macros is not compile time evaluation without macros. Thank you for the insight.


That’s nothing compared to what Constexpr and other c++ features let you do. The conditional compilation example for instance.


Zig has shown the true power of constexpr in being able to provide a powerful construct that generalizes well.

After sum types (and pattern matching), constexpr is the next great language feature that will be everywhere soon.

I would love for Rust to fully adopt constexpr.


[flagged]


Named arguments in statically-typed languages are naturally a hard thing to implement.

When used in code, it also results in APIs with many branching configurations, such as Foo::new(port=8088, host="localhost") or Foo::new(pipe="./myfile") – Rust's preferred idiom here would be Foo::new_listen("localhost", "8088") or Foo::new_pipe("./pipefile"). So at this point idiomatic Rust is preferred.

Also named parameters amplify your API naming fail surface (think misspellings!) beyond structs and impl functions... Like in "max_connexions" (sic), which is actually an unintended typo in one of the RFC proposals for naming parameters in Rust! It's hilarious.

Default parameters: my_port.unwrap_or(8080)

Variadics: useful, but needs to be carefully considered. There are workarounds now in nightly for C compatibility. There are also concerns about how "..." and ".." syntax play out, and incompatibilities on different implementation targets. And, yes, safety issues.

It's not that Rust people will mumble about safety. It's that it would be stupid to design a language all around safety and then implement every cool language feature in the world today without scrutinizing them for safety issues.


1. You don’t have to use those features if you don’t want to.

2. Every other modern language makes this work, only rust makes excuses.

3. Rust already has default generic type arguments and “named parameters” for struct constructors (which in fact are mandatory), to which the same criticisms also apply.

Rust is already tedious enough to work with as is, and the last thing it needs is more builder pattern or manual name mangling. You already need mut/non mut for getters, sync/async, etc. just look at [serde](https://docs.rs/serde/latest/serde/trait.Deserializer.html) or polars or ndarray. Ugliest language ever.


Which "every other modern language" makes this work? Dynamic languages? C++? Nope. Golang? Nope. Java? Nope. JS/TypeScript? Nope.

Like I said, if the feature is not compatible with the core principles of the language maybe it shouldn't be in it. Look for the issues on GitHub and you'll see how and why the feature is not in the language. No excuses. All debatable, but the arguments do make sense. And go make your case there!

You absolutely don't need to do manual name mangling. It sure does have things in common with how struct literals work, and in fact you can use them right now with some of the common patterns for named parameters in Rust:

     foo( FooConfig::new().host("localhost").port(8080) );
  
     enum Param { 
         Listen { host: String, port: u16 },
         Pipe { file: String },
     }; 
     foo( Param::Listen { host: "localhost".to_string(), port: 8080 } );
     foo( Param::Pipe { file: "./pipe".to_string() } );

    
Not that bad really, makes it clear what combination of params you can send. And nicely compatible with `match` patterns later in foo()! I absolutely despise this about named parameters:

     plot( y=100, z=300, x=200 ); --> the order of x,y,z makes it hard to read!
     plot( 200, 100, 300); --> much better! 

 
> You don’t have to use those features if you don’t want to.

Yeah, then we can have Rust bloated like C++, an intractable codebase and many ways to do it.


You don’t have to use the language if you don’t like it or if you don’t agree with its design decisions. It sounds like you don’t like anything about the language, so adding named parameters won’t win you over.


> Named arguments in statically-typed languages is a naturally a hard thing to implement.

So hard that Ada, Swift, C#, OCaml, Modula-3 have them.


Yes, and sometimes an order of magnitude slower compilation and a significantly larger memory footprint, and typically slower execution times than Rust.

Swift coincidentally has a slow compilation time thread today on HN's front page.

Ada, OCaml, Modula-3... come on, we're talking about a combined 100+ years development time. That's plenty of time to ship pink-laced kittens bolted into the syntax.

C# has dynamic typing and a lot of dynamic features are done by the ASP.NET runtime. Not comparable.


In what regards Ada, OCaml, Modula-3, it doesn't matter, moving goalposts.

C# named arguments don't have anything to do with COM dynamic support, learn about what you are actually talking about.

Swift slow compilation time doesn't have anything to do with named arguments, on top of that, contrary to Rust, has a stable ABI and support for native libraries, no need to compile the world from scratch as it happens with Rust. Again nothing to do with named arguments.


Basic features? It took C++ a long time to get these done. I like C++, but I'm not sure what the point is of being negative about other languages. Would you prefer no one try new things?

They would accept new proposals for these if you want to contribute, or you can just continue complaining.


Yeah, every time... and right here you see the meaning of 'every time'


Ssssh...the HN Rust Brigade will hear you


Wrt hardware_destructive_interference_size, what happens if you compile for X86 which is then run on a machine (Rosetta) that has a (2x) bigger cacheline internally?


x86 (generalized: an ISA) doesn't have an inherent cacheline size; a given implementation of that ISA does. std::hardware_destructive_interference_size and friends are compile-time constants, and are defined based not just on the target ISA, but on the target implementation (see -march and -mcpu for gcc and clang).


True, but the answer to the question asked is: if you target a 64B cache line and run with a 128B line, you may get more sharing, with the consequent performance and scalability problems. Of course, if you are all that sensitive to such matters, why are you running on an emulated machine? And Apple Silicon doesn’t offer many hardware threads anyway, so contention problems are never very severe.


I'm not aware of any widespread x86 chips with a line size bigger than 64 bytes. The ISA also practically does have a minimum line size implied by the memory model, hand-waving.

My point is that it varies at runtime, but the value in the standard is constexpr, so you can't actually rely on it unless you control where it executes.


It does sort of bring up the general question of why this is a compile time and not runtime constant. I doubt x86 will double its cache-line size any time soon, but if it did -- and people are running binaries with cache-padding at 64-bytes -- expected behaviour is going to differ. Not in a way that's going to make anybody lose their minds, mind you, but this kind of micro-optimization will just cease to be effective.

EDIT: naturally I understand that compile makes sense for e.g. statically sizing array sizes etc.


Raymond Chen discussed that in a relatively recent blog post [0]. The main points are that those constants are typically used to influence struct layouts/alignments, and those must be decided at compile time. The alternative is to generate multiple versions of a struct/other code with different alignments/layouts and choose among them at runtime, but that comes with other tradeoffs.

[0]: https://devblogs.microsoft.com/oldnewthing/20230424-00/?p=10...


99% of the time {constructive,destructive}_interference_size is used as a parameter to alignas, which necessarily takes a constant value. It would simply replace a lot of const size_t cache_size = 64 in user code, making it slightly more portable. Having a runtime value can sometimes be useful, but it is beyond the scope of the feature.

C++11 std::atomic had similar scope creep, where it had atomic::is_lock_free as a runtime query. Nobody ever used it, as it is simply not something you care about at runtime. So C++17 added is_always_lock_free as a compile-time query, which is actually actionable.
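
A typical use, keeping two hot counters on separate cache lines so writers don't falsely share (the field names are arbitrary):

    #include <atomic>
    #include <new>

    struct Counters {
        // The alignment must be a compile-time constant: it fixes the struct layout.
        alignas(std::hardware_destructive_interference_size) std::atomic<unsigned> reads;
        alignas(std::hardware_destructive_interference_size) std::atomic<unsigned> writes;
    };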


> I doubt x86 will double its cache-line size any time soon

Well, why not? They doubled it from 32B to 64B between the Pentium III and Pentium IV.


I don't know enough about CPU design to say really but Pentium IV is a long time ago now. And since then, I suspect that a lot of assumptions about 64B line sizes have been baked in.

Apple could go to 128 because they were rolling out a whole new ISA, so were breaking compat anyways.

That said, they have amazing performance on the M1, and I wonder how much of that has to do with the wider L1 size.


You get potentially poorer cache performance because your alignment value is wrong. Depending on how you used the value, I suppose.


So much of modern c++ is trying to get around just using the preprocessor.


The preprocessor is such a botch (charitably, state of the art 1975). As Stroustrup said to me once in a discussion on the topic, "Once an ecological niche has been colonized it's almost impossible to clean up, so you have to work around it."


Yeah, but they're trying to replace it with more botch. "If constexpr can be used to precompute statically-determinable logic, except except except..."


Constexpr can replace most preprocessor macros in a subset of C++. This is a huge improvement over preprocessor macros and functions, which are very difficult to refactor. Even if it turns out to be 'botch', we can reason about the 'botch' and find ways to mitigate problems with it through static analysis.


> So much of modern c++ is trying to get around just using the preprocessor.

...and that's a good thing


We already have the preprocessor, and additional abstractions carry more than linear cost. I'm dissatisfied with these half-measure patches on static-time code evaluation.


What makes them half measures and not full measures?


Mostly the fact that the preprocessor doesn't go away. I still need it for the include system, I still need it for string interpolation, the file and line macros...


> I still need it for the include system

The goal is to get rid of this using C++20 modules (and potentially reduce compile times by avoiding repeated expansion of header files).

> the file and line macros

This is addressed by C++20's std::source_location [0]

> I still need it for string interpolation

If you mean the `#` macro operator, then yeah that is still needed.

The point is to provide ways of doing these tasks that don't require a weird text replacement macro language and instead do them with C++. Obviously due to legacy codebases the preprocessor is going to stick around.

[0] https://en.cppreference.com/w/cpp/utility/source_location


Yes. My biggest frustration with c++ as a language is that there are often two ways to do things: the right way, and the way you will find in tutorials and codebases because the right way didn't exist yet when they were written. This means the cognitive load to use the language in any context that isn't "greenfield project with experts" is significant relative to other languages.


constexpr is pretty different from include and line numbers (which are addressed by other features).

IDK what to say....a 40+ year old language has some warts and it has old code using those warts.

Doesn't mean you have to be stuck actually using them forever and ever


> Doesn't mean you have to be stuck actually using them forever and ever

If I'm writing new code, probably not. But since it's a 40+ year old language, I'm doing more comprehension, maintenance, and extension of old code than writing new, and every new feature makes that task harder over time.


And for good reason?


maybe. the preprocessor is much simpler to understand than constexpr and friends


Not if you're a compiler or a static analysis tool...


C++ should finish the job and deprecate the preprocessor.


To be honest, if that happened I'd be 100% on board. My issue is combinatoric complexity, not a new way to do things. I'm not frustrated that we have constexpr; I'm frustrated I now have to worry about how it interacts with macros and namespaces and lifetimes and oh what if the values are templates and so on.

Each feature adds superlinear cost to understanding the language and the code you're reading.


Modules get rid of includes and include guards.

Unfortunately the preprocessor is still needed for stringification and other symbol manipulation.

C++26 (2126 of course) will introduce enough reflection to finally get rid of the preprocessor.


I added enough to D that it didn't need a preprocessor, yet could do all those things.


I'm sure that D compile time facilities can do it all right. The issue is that the C++ bits (compile-time introspection) will never be standardized.


I really need to get around to looking into D.


When D showed up I really wanted it to become mainstream language. Unfortunately it did not.


Doesn't mean you can't use it. Lots of people do.


It does, because many delivery projects choose languages based on the product SDKs, so if it isn't in the box, it doesn't come into the discussion at all.

Also in the meantime, most of the cool stuff in D is showing up on the languages that come on those SDKs (C++, Java, C#), weakening the argument to look outside of SDK supported languages.

Doesn't matter if D had it first, or if it has a better implementation; worse is better when ecosystem, tooling, IDE and technical support are part of the equation.

So it remains a language for hobby coding.


D is shipped as part of the GNU Compiler Collection (GCC).

> most of the cool stuff in D is showing up on the languages

It's true that many aspects of D are copied by other languages, but badly.


At most GCC could be considered a piece of the Linux SDK, but that wasn't the kind of SDK I was talking about, rather a full end-to-end product SDK.

Badly copied doesn't matter, it is one reason less to look for where it came from.


All of my many products are commercial. I mostly have freedom to choose the language. But I am also my own software development company, and I have never felt brave enough to leave my clients with a mountain of D code and no one to help after I move on to develop the next product somewhere else. This is very unfortunate.


`if constexpr` is such a disaster. They were so close to getting it right (not introducing a scope) but they missed.

Similarly constexpr itself is also genuinely ridiculous: (I have said this on hackernews before) It's such a stupid idea to require an annotation everywhere you want to evaluate things at compile time: practically everything will inevitably be evaluatable at compile time, and you need the implementation anyway, so just let it fail rather than asking for permission everywhere.

Having the keyword for variables and constants is fine (i.e. top down and bottom up constraints need to be dictated) but you shouldn't need to write constexpr more than that.


> It's such a stupid idea to require an annotation everywhere you want to evaluate things at compile time, practically everything will inevitably be evaluatable at compile time

And making constexpr-ness an explicit contract makes sense to me: if it's not explicit, it can be an unexpected property, and can break at any change of implementation.

Yes, requiring that a function be marked puts the burden on the implementor, but it also means they have considered this use-case and made it officially part of the API. It's not a trivial promise to make.

> and you need the implementation anyway, so just let it fail rather than ask for permission everywhere.

"Let it fail" is the issue, if it's implicit a user can assume this is working by design, then find their program stops compiling on the next release not because the maintainer wilfully broke the API, but because they changed an implementation detail of the function and constexpr-ness is not something they considered (or assumed correct) in the first place.

Maybe the language should work the other way around and everything should be constexpr by default and functions should opt out of constexpr-ness, but that's 40 years too late for C++. And I can't think of any langage which does that. And frankly it feels like the wrong default for the reasons above.


Except constexpr isn't a guarantee. Compilers can choose to silently evaluate constexpr things at runtime. And I have run into this before: a compiler with a recursion limit causing it to bail on constexpr and just emit the code.

So constexpr isn't a guarantee of it being evaluated at compile time, and non-constexpr isn't a guarantee of it being evaluated at runtime. Cool, huh?


> constexpr isn't a guarantee of it being evaluated at compile time

constexpr is a guarantee that you can use the thing in a constexpr context, and this is where the "evaluated at compile-time" guarantee can come from:

    template<typename T>
    auto func() {
      // here some compilers can still choose to evaluate x at run-time - and very likely all of them if no optimizations are enabled
      constexpr int x = f(); 

      // but here it becomes mandatory for this use of x to be evaluated at compile-time, since the number is literally going to be part of the compiled binary as part of the function name mangling
      return std::integral_constant<int, x>{};
    }


Constexpr does not guarantee that a constexpr function can be called at compile time, unfortunately. Only that it can be called at compile time for a subset of all possible parameters.

Except of course the subset of parameters for which it is constexpr callable can't be checked at function definition time (if it exists at all), only at function invocation time.

Which makes the constexpr annotation useless; it is there only because the authors couldn't otherwise get the paper past some committee objections.


Is such a mangling mandated by the standard?


Mangling is mentioned in the standard.

In this case though the underlying reason is that its part of the type (system) not because of the mangle specifically.


> Mangling is mentioned in the standard.

Forgive me, but can you be clearer than "mentioned"? Is the mangling required to contain template parameters for return types?

> In this case though the underlying reason is that its part of the type (system) not because of the mangle specifically.

I'm not sure. The compiler knows it will always be the same type, so under many uses of this function I could easily imagine a compiler that doesn't actually fill in .value until runtime.


> Forgive me, but can you be clearer than "mentioned"? Is the mangling required to contain template parameters for return types?

The mangling will contain template parameters, as you can have:

foo.hpp

    template<typename T>
    T f();
foo.cpp

    template<>
    int f<int>() { return 123; }
    template<>
    float f<float>() { return 123; }
bar.cpp

    std::cout << f<int>();
and the right function has to be found. demo: https://gcc.godbolt.org/z/rMjYoEzaK


Sorry, I meant parameters that only are being returned, and not passed in.

So in the example several layers above, T uniquely identifies the function. I don't see any need to involve std::integral_constant<int, x> in the mangling.



Mangling isn't technically required per se, but it's by far the most common approach (basically a local minimum in terms of cost).

I think this subset of C++ templates is probably undecidable still.


Mostly makes sense to me: constexpr means the function is evaluatable at compile-time, being callable at runtime is part of the contract and why adding constexpr does not change the API.

For an assertion that something is evaluated at compile time I’d assume a variable-level annotation (in a non-constexpr function). Though I don’t know c++ enough to have any idea whether that’s the case.

And constant evaluation as an optimisation has always been a thing, so it's not really surprising.


> Mostly makes sense to me: constexpr means the function is evaluatable at compile-time, being callable at runtime is part of the contract and why adding constexpr does not change the API.

The problem isn't that you're able to call it at runtime with runtime data, it's that if you give it compile-time data you have no idea when it will be run.

> For an assertion that something is evaluated at compile time I’d assume a variable-level annotation (in a non-constexpr function). Though I don’t know c++ enough to have any idea whether that’s the case.

Oh, were you under the impression that constexpr was just for functions? It applies to variables too, and it's not a guarantee on them. You need to use other, newer annotations.


C++20 has consteval and constinit to fix this (at the cost of making things even more complicated).


Amazing.


> I can't think of any langage which does that

D does. And it's a very popular feature. In fact, D goes even further - only the path taken through a function needs to conform when running it at compile time, not 100% of the function.

> it feels like the wrong default for the reasons above.

In about 16 years of very extensive use, it has never been brought up as a problem.


The other problem with implicit constexpr is that it can change the timing characteristics of your code depending on whether your input data is known at compile time. And then if you want it to be constexpr you have to troubleshoot why it sometimes isn’t. I think this approach of maximal control in exchange for some foot guns is very much in keeping with C++.


In D you don't have to troubleshoot it. The compiler will tell you where the problem is if it can't be evaluated at compile time.

> for some foot guns

There just aren't any with this feature.


> Maybe the language should work the other way around and everything should be constexpr by default and functions should opt out of constexpr-ness, but that's 40 years too late for C++.

That's how lambdas behave in C++, so it's definitely feasible. gcc has a flag, -fimplicit-constexpr I think, that does this for regular functions, and it doesn't appear to cause any significant issues. I think there has been some talk around making this the language behavior at some point in the future.


> It's not a trivial promise to make.

It's actually very trivial. 99% of code I write can be run at compile time.

> maintainer wilfully broke the API

Did they break the API or did they change the semantics? The API is tittle-tattle, the point is that you cannot run it at compile time anymore (e.g. introduced an opaque new dependency by accident? Oops)

> And I can't think of any langage which does that

Being able to opt out is not what you want, but D allows everything to at least attempt to be run at compile time. This has been very profitable.


> 99% of code I write can be run at compile time.

Good for you, you missed the point.

> Did they break the API or did they change the semantics?

What are you even on about?

> The API is tittle-tattle

What are you even on about, part 2.

> the point is that you cannot run it at compile time anymore

Yes, the API allowed one thing, now it does not. That's no different from changing an optional parameter to mandatory or any other thing which breaks the API.


> Good for you, you missed the point.

>> Did they break the API or did they change the semantics?

> What are you even on about?

Constructive.


constexpr is great. I can finally do most of the compile time computation that previously required template metaprogramming, and it's much more readable by comparison. C++17 makes it much more ergonomic to use too over C++11. I can't wait for us to finally upgrade our toolchain to take advantage of C++20.

If you're in embedded and you're not pushing everything you can into constexpr, you're missing out on correctness and code size benefits.


Doing stuff at compile time is good (although the compiler can do 99% of it anyway, it's nice to have guarantees); C++ just got it wrong.


It could certainly be better, but in a world where my options are C, C++, or assembly, I pick C++ every day.

I do keep hoping for something else to get big enough we can use it because C++ sucks, but is still better than the alternatives we can use.


Virgil doesn't have all the backends for the diversity of embedded ISAs out there, but the first version compiled to C and ran in as little as dozens of bytes of memory (on 8-bit AVR). Nowadays I am not doing embedded systems, otherwise I'd write more backends.


C++ is much better than C. Personally I use D everywhere I would've previously used C++.


> `if constexpr` is such a disaster.

to me it's been a very useful tool for reflection, for instance

    if constexpr (requires { foo.someMember; }) {
      use(foo.someMember);
    } else {
      some_fallback_case();
    }


This is "design by introspection". It works a lot better if you can

1. Do this in types.

2. Do this without introducing a new scope inside a function.


I would absolutely hate `constexpr` to be implicit and left to be decided by compiler.


How D decides it: if the grammar says it's a constant-expression, whatever makes up the constant-expression gets evaluated at compile time - functions and all. For example,

    int square(int x) { return x * x; }

    static assert(square(3) == 9);
This allows one to write unit tests that are checked at compile time, whereas:

    int main() { assert(square(3) == 9); }
is always a runtime check.

Compile time unit testing is a significant win:

1. it's always more productive to find problems at compile time rather than run time

2. it isn't necessary to conditionally turn off compilation of the tests for the release build

3. you can't forget to run the tests before shipping


What if it were always allowed to evaluate at compile time, but you could add a keyword to assert that it did (like force-inline)?

That way you get benefits by default. Whereas right now it's not allowed to.


These are two completely different things.

One is the provider of the API promising that the function supports compile-time evaluation.

The second is the consumer of the API ensuring that an expression has been compile-time evaluated. It's nice that the consumer can make sure, but that's not helpful if the provider of the API never specifically intended for a function to be evaluatable at compile-time.

> Whereas right now it's not allowed to.

Of course it's allowed, but you're at the mercy of the compiler's decisions.


> Of course it's allowed,

It's not. If you try to use a function in a constexpr context, the compiler will complain that it needs to be marked as such. So the API creator needs to label every function in case a client wants to use it.


It is decided by the compiler.

Note that I mean at the declaration not the callsite.


Disaster? Just because it introduces a scope? Please elaborate. A scope is the right thing to do; every other {} introduces a scope, so if this hadn't been consistent with the normal 'if', that would have been extra confusing.

Maybe you'd like to do something like:

    template<typename T>
    struct ABC {
        int x;
        int y;
        if constexpr (is_3d<T>) {
            int z;
        }
    };
But yeah, that's not what if constexpr does.


That’s exactly what it could have allowed, and become a much more powerful construct in the language


Which is why it's crap => back to macros we go.


They should just skip to the end and make the entire language legal to execute at compile-time. Virgil does this and many Lisps before it. You then get a heap against which the entire program can be optimized at compile time. C++ is slowly catching up to Virgil I for microcontrollers, ala the 2006 paper.


The reason why you want it to be part of the API contract is to avoid future breaks.

If you publish a library as "constexpr" you are indicating to your users that they can use it in a compile-time context and that future changes to your implementation will remain compile-time executable. If you just say "anything that can be computed at compile time gets auto-deduced to constexpr" then you rely on some library that is compile-time executable by coincidence when you really really really need it to be compile-time executable. Now when that library owner makes an edit that means it can no longer be executed at compile time, your code breaks.


The example in the article is a little weird, too. Putting aside the weirdness of the intended purpose of the example, what's wrong with function overloading? I think the overloaded version is a lot easier to read. With the `if constexpr` version, now I have to edit the one function with new cases if I add a type that doesn't fit the "number or string" pattern, whereas with overloading I just implement a new `length` function.


The example is weak but a good thing to look into is https://dconf.org/2017/talks/alexandrescu.pdf i.e. this type of compile time branching (when implemented properly as mentioned above) lets you write code that "reacts" to other code in the project's capabilities. Allows some very nice patterns within a C++ flavoured type system.


> Having the keyword for variables and constants is fine (i.e. top down and bottom up constraints need to be dictated) but you shouldn't need to write constexpr more than that.

This.

Constexpr should have been at the eval site, i.e. something like:

  consteval auto x = foo();
And foo() is just a normal function: if the compiler can eval it at compile time, all good; if not, compiler error.


I wish constexpr actually asked the compiler to evaluate an expression at compile time and not just a statement.


In D, every part of the grammar that is a constant-expression must be compiled at compile time. For example:

    void test() {
        int square(int y) { return y * y; }
        int x = square(3);  // evaluates square at run time
        enum y = square(3); // evaluates square at compile time
    }
(Although, when the optimizer gets through inlining square(), etc., it will wind up at compile time anyway.)


I think a compliant c++ implementation is allowed to do this and I think all do so at least for straightforward cases.


It's allowed to but you aren't allowed to assume.


What compiler is he/she using? I cannot get this to compile at all but I might not know the magic compiler incantation to get it to work.

  template<typename T>
  auto length(const T& value) noexcept {
      if constexpr (std::integral<T>::value) { // is number
          return value;
      }
      else {
          return value.length();
      }
  }


Not related to your post, but it was marked dead, then I looked at your comment history and didn't notice anything, but many of your comments were marked dead. I "vouched" for this one. It looks like you angered someone and they have been flagging all of your posts.



Yeah I was using godbolt too and wondered why I could not get it to compile. Just funny the article is about c++17 but you need c++20 on there to get it to compile.

My c++ is super old. But thanks that is great I could not figure it out.


It should work in C++17 too, the compile settings are just left over from something else I was doing on CE.


std::integral was added in C++20, so this example will only work in C++20. Weird that the author used this example.

https://en.cppreference.com/w/cpp/concepts/integral


In this case, I think it's a typo, and should be std::is_integral:

https://en.cppreference.com/w/cpp/types/is_integral
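
The same example with the C++17 trait instead of the C++20 concept compiles fine:

    #include <type_traits>

    template<typename T>
    auto length(const T& value) noexcept {
        if constexpr (std::is_integral_v<T>) { // is number
            return value;
        }
        else {
            return value.length();
        }
    }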


How do you call it and what is your error message?


I'm sure other languages would have more than 17 useful features.


Is it "more than 17 features" or more "features than C++17"? For a programmer your way of formulating questions is a strange one.


Whoosh!


The whole discussion about constexpr (even though it is a useful feature) is one example, out of many, of what a f up language C++ is/has become. It's astonishing how many people have to say: I don't like the language, but there's no good alternative for the context we are working in.


Especially with embedded applications, the issue is lack of tooling and sluggish vendors. For them, even C++ is "too new" in some cases to properly support.

Rust is making good progress despite these issues but it's still a pain to use in anything but the most common systems. As soon as you're using SoCs that have an FPGA part, for example, you're forced to use proprietary vendor tools and good luck getting Rust to work with those in the next few decades...


TBH, for Rust in the deep embedded space -- like MCUs, not MPUs -- when I looked at it a while ago, it was fairly dubious how much intense effort would be required to a) trim fat to get reasonable binary sizes and b) keep the fat under control. I'm used to doing much of this with C++, but it seemed way easier to end up with a bloated binary in Rust <I'm looking at you, panic handler!>

Starting a job in a week which is embedded Rust, but more on the Linux-on-SoC side; we'll see how it goes.


I do embedded development in rust on severely limited/constrained MCUs and agree with that being the biggest hurdle. There are resources to detect any panics in the resulting binary and bail as well as the compiler option to halt rather than unwind on panic. Together with a custom panic handler you can get most of that in check. Oh and you really have to compile in release mode or a profile derived from it - debug is just too damn big.

The bigger issue is that you are at the whims of llvm as to what your codegen looks like. Regressions as llvm versions are upgraded are routine and can sometimes result in significant regressions on micro benchmarks. The good news is the rust team treats these pretty seriously even if they do take time (and several versions) to get fixed but you’ll never see the issue closed out in the issue tracker. All of performance regressions, codegen bloat regressions, and llvm optimization failures (eg not stripping out a panic handler in safe rust when it should be possible to deduce that the panic cannot be reached) are all tagged and monitored and tests are often added to catch regressions once fixes land. The good news is that when you run into these in safe rust you can usually work around them with unsafe rust - unreachable_unchecked() is your best friend here and is amazingly easy to use either with an if or with a match to disavow a potential state before the code that handles it.


I’d suspect it has to do more with familiarity than with intrinsic differences between the two. My team does this kind of work, and it’s not difficult. It is worth tracking, paying attention to, and making changes if you’ve done something wrong, but what causes these issues is generally pretty straightforward.


the only astonishing thing is how pathetic rustafarians are. Whenever someone writes something positive about a C++ feature the squad is working extra time to diminish the article/opinion. You are trying so hard, lol... "constexpr if" is brilliant, it simplifies a lot of template code.


> of what a f up language C++ is/has become

I believe this misconception on your part stems from a lack of awareness (or understanding) of C++ design goals as a language.

Have a look at this writeup: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p21... (caveat: not formally adopted)

constexpr supports the following goals:

* Provide the programmer control over every aspect of performance: Compile-time vs run-time execution is an aspect of performance.

* Excellent ergonomics: Using constexpr/consteval improves on earlier mechanisms for achieving compile-time evaluation, such as macros, template meta-programming etc. Those are unwieldy, often difficult to read, and inflexible (without jumping through hoops). constexpr is not.

etc. etc.

and marginally supports the following goals:

* Code should perform predictably: Control over what exactly runs at run-time improves predictability.

* Leave no room for a lower level language: While constexpr is neither low nor high as a language feature, one can argue that having it undercuts the potential for, say, a "C + constexpr"-like language, or another lower-level language with a more elaborate macro system.

etc. etc.

I could go on naming goals constexpr supports, there are lots more. I challenge you to find goals from which it detracts.


> (caveat: not formally adopted)

Not only was P2137 "not formally adopted", it was purposefully written so that the committee could say explicitly "No" to these goals. It's in some sense a manifesto for one of the 2022 "C++ successor" languages, Carbon, not for C++.

If you want to cite actual goals, you won't get much from WG21, which prefers not to get tied down by having any principles to speak of. Bjarne himself however wrote a book in 1994 "Design & Evolution of C++" which might help you determine some goals.

I don't see constexpr functions in particular as a success. There are too many caveats; it's easy to find C++ programmers who've written what they assume is a constant-evaluated expression but it isn't, because for one of many reasons their compiler has decided it's runtime evaluated, and the programmer was surprised.


The reason the standard doesn't guarantee constant evaluation of constexpr is that it would be hard, because of the as-if rule, to specify it in a good way other than as a non-normative comment. Remember that as long as the sequence of side effects is preserved (and even then, concurrency introduces non-determinism), any translation is compliant (even no translation, in the case of interpretation).

In practice you do get guaranteed evaluation wherever the result is required at compile time (for example as a non-type template parameter or as an array size).


Your first point is important, and now that you mention it, I do remember similar names from the Carbon introduction and this paper. I should probably look to D&E for a less presumptive set of goals.

As for constexpr - you only get ambiguity if you assign a constexpr expression/function call to a non-constexpr value. If you use constexpr variables to hold the results of your computation, compile-time evaluation is guaranteed. I don't think it's that confusing.
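
In code form, a minimal sketch:

    constexpr int f() { return 6 * 7; }

    const int a = f();     // the compiler may still evaluate this at run time
    constexpr int b = f(); // guaranteed to be evaluated at compile time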


I wrote that constexpr is useful. In fact that is just my point: a useful feature brings controversy. My take on it is that this is because the language is f up.

constexpr, amongst other uses, cleans up what used to be hard-to-understand templated code.

Static assert cleans up other complicated constructs.

Make_unique together with heaps of other primitives tries to clean up cumbersome memory management.

[[fallthrough]] addresses the fact that a break is easily forgotten, by marking the places where it is not forgotten but deliberately absent...

You can name so many of new constructs that solve an earlier problem.

If you went through all the C++ standards upgrades and know the history, you understand why all these improvements exist. If not, it all looks overly complicated, which in fact, if you look at it objectively, it is.


Nim works well in embedded contexts. It can compile to C89 or C++ and integrates easily with most any C compiler. I’ve used it with FreeRTOS, Zephyr, and bare metal. It’s a smallish community but there’s a few shipping products using it and the language is really easy to learn if you know C/C++/Python.

Executables from Nim also tend to stay pretty lightweight even when using generics / templates.


The language is not fucked up. Come back to Rust in 40 years and see what has become of it, if it is still not COBOL-style legacy by then. Also I do not think you can compare it to C++; it is too limited for that in my opinion. I would rather consider it as "fixed" C that satisfies some safety constraints.

What is fucked up is us not being perfect machines and not producing ideal design from the first try and then having to deal with compatibility. Well duh.


Rust. The alternative is Rust.


I don't think Rust has an equivalent to C++17 constexpr; its compile-time evaluation is more limited.

Zig may be a better option when it comes to compile-time evaluation.


The `if` part of the quoted example, you can do in Rust; the `else` part is coming with the specialization feature soon. Everything you can't do with traits, including specialization, you should probably not be doing; modern C++ tends to be an incomprehensible template forest.


I hope you will be proven right!



