C++20 Design Is Complete: Modules, Coroutines in C++20 (reddit.com)
238 points by ingve 59 days ago | 197 comments

I am still trying to understand when T&& is a r-value and when it isn't.


I feel I will never master C++ -- neither will Stroustrup.


It seems that people apply a higher bar to being a master of C++ than they do to other languages.

Yes, it's possible to come up with "gotcha" questions that are hard to answer for C++. Every language has dark corners; C++ has more than most, to be sure.

Yet it seems that C++ detractors love to roam around in these dark corners far more often than actual serious users of the language do. It's very, very rare that I have run into a misunderstanding of the language that has caused a bug. What you almost always get is a compile error, which in fact, is exactly what you want.

It's not the new features causing the bugs, it's the old ones.

It's uninitialised variables, dangling pointers, undefined behavior, etc etc etc.

And guess what, it's the new C++ features that are making those kinds of problems rarer and rarer as time goes on. It's starting to get pretty damn hard to use-after-free or leak memory.

I love to bring up Python examples, as it is thought of as a beginner-friendly language, yet many seem unaware how big the changes to the language, standard library, and behaviour across releases actually are.

My biggest issue with python is distribution: it’s a much worse headache to try to ensure your product works on other people’s machines (Python versions, dependencies, and anything with Cython just makes things worse) than just requiring a C++14-compliant compiler.

Moreover, sources written for versions >= 3 are seriously not guaranteed to work across different 3.x versions.

Yeah, I'm learning Python now and was working through the book Learning Python, which covers Python 2 and 3; there are plenty of differences to trip you up.

I used to be on the c++ committee and contributed to both gcc and clang. That T&& r-value mess, along with c++ still having no built in method to print a vector, are the straws which persuaded me to abandon c++. It seems designed to be confusing.

> c++ still having no built in method to print a vector

This a thousand times. All the best languages have a good way to print out a structure, and allow interactive inspection. Instead with C and C++ we have to suffer with debuggers rather than just using good old print statements.

I've been enjoying the productivity of debuggers since the mid-'90s, with the ability to graphically display data structures.

Printf debugging is the actual suffering.

I think both are good tools and in many cases using printf or writing to a log file are very good options instead of loading a debugger.

That is what trace points allow for, without having to touch the original source code.

Nah, sometimes using a debug build changes too many things, and being able to debug a release build is still valuable.

Who said you need a debug build?

You think that print debugging is wrong and debuggers are the way to go?


There is a time and space for print debugging or its equivalents, like when debugging very low level code, or when debugging rarely occurring problems.

However, using a real debugger is extremely powerful when it's available.

Definitely, printf debugging is a tool left for the few cases where a proper debugger, JTAG, IntelliTrace, DTrace, JMX, Flight Recorder, or other similar technology isn't available.

With a graphical debugger, how do you produce a log-trace of what your program did?

How do you show a table of just the information you need, to help you find a bug? What if the table is a join of multiple datastructures from distinct locations?

How do you, after stepping forward through the code until you reach a certain configuration, make a small modification to the logic, so that you can replay from the start with the intent to reach a slightly modified configuration?

Are graphical debuggers on par with simple print debugging for notifying you when a particularly complicated situation occurs, one that can only be described by complex logic?

I mentioned already a few possibilities, as I was expecting that question.

Edit: Since you edited your question after I replied, all those use cases are still covered by intellitrace, dtrace, flight recorder, jtags, with the benefit of not having to recompile the code.

Naturally you can come up with wicked examples that those tools don't cover, then again they will be most likely irrelevant for the large majority of devs.

Which raises the question, how are these not print debugging? How are they better than print debugging?

I admit I don't know any of these tools, but I hope (1) they don't introduce another language than the one the project is coded in, (2) they let you put the logging statements inline in the code since otherwise the debug setup would be hard to synchronize to the actual code.

The situations I described are not particularly wicked, but rather normal debugging. If the bug is not trivial (in which case an interactive debugger can help finding it quickly), then the process is always to start with a global but coarse view, decide where the bug likely is, and zoom in on that partial view, requesting more details.

I doubt any automated system can help much there. Automatic structure printing might already not work in many cases, since it prints too much data, or formats the data in a way that is too mechanical and not fitted to the task. Or, in the case of object languages, the system does not know how "deep" it should print these structures.

I think automatic structure printing is a nice to have for simple situations with few structure fields. In practice I'm almost always fine with typing a printf line manually. I don't think that typing and modifying the line as I go would take the majority of my debugging time.

I use interactive debuggers myself, and I think they are faster in simple cases since there is no edit-compile-rerun cycle. But like all interactive GUIs, they are not (or only badly) scriptable and automatable.

They are better than print debugging because they can instrument applications even in production, without any change to the application.

Stuff like DTrace and Intellitrace have OS-level support to integrate with the application being instrumented, execute whatever actions are required, and interoperate with the debugger. Intellitrace even goes as far as supporting time-travel debugging.

Yes some of them, e.g. DTrace, do have their own little scripting languages.

c++20 gets a new formatting library based on fmtlib which is able to print data structures.

    // needs <vector>, <algorithm>, <iterator>, <iostream>
    std::vector<int> v = {1, 2, 3, 4};

    std::copy(v.begin(), v.end(), std::ostream_iterator<int>(std::cout, " "));
Kinda long, but it gets the job done.

Now do it when v is a vector of vectors.

A single template implementation of vector allows vectors of vectors in the first place. Just add a single template print function.

The actually confusing part is that && has two different meanings:

Rvalue reference: the passed value either has no memory location yet, or has a memory location that won't be accessed anymore (e.g. marked with std::move):

     void foo(int&& a)
Universal reference. `a` is inferred as an rvalue reference if it is initialized with an rvalue and a normal reference otherwise:

    template <class T>  void foo(T&& a)
So && with type deduction tries to forward rvalue-ness. It does a terrible job, though, which is why std::forward exists.

Side steps the perfect forwarding part.

I thought the post was kind of long enough as is. :)

Wonderful news! I don't use C++, but I appreciate how at the end of the day, C++ will always be with us, and so will its benchmarks/expectations, battle-tested tooling ecosystem, and generously bestowed developer power. Makes the rest of us strive that much harder :)

I'm not. Just like I'm ok with VHS and scan lines becoming niche markets, we need to push forward in programming language design. We've come a long way in the past 35 years; lessons have been learned, the field of programming language theory has progressed, and we're ready to iterate. C++ has been retrofitted, but like Java, its commitment to backwards compatibility has ballooned complexity to an unsustainable point, there are simply too many things that will never be fixed, and C compatibility will never be dropped.

We need to forge forward, and support the next generation of programming languages.

Well that's just like, your opinion, man.

In all seriousness, C and C++ are with us forever. Improving and moving these languages forward, and improving their safety with static and runtime analysis (and hardware support) has to be part of the plan moving forward.

To me, that's a very bleak outlook, and as I already mentioned, retrofitting C++ will never solve all of these issues; retrofits are always additive, and simply increase the complexity of the compiler and specification to the point that very few people actually understand the majority of C++'s semantics.

Retrofitting also doesn't solve the philosophy of a language. There are simply aspects of C++ that are too vital to its identity to change. Is object oriented programming the final word on abstraction in programming? Probably not. This is a young field, and there are always better solutions lurking around the corner. When we buy into the notion that something should live forever, we rob ourselves of the opportunity to move forward, or to at least know with certainty whether something is truly the best.

Unless we want to repeat what has happened with COBOL, where the systems have lasted forever to the point that all of the COBOL programmers are dead or retired, we need to start evolving our philosophy to favor language replace-ability, or stop guaranteeing backwards compatibility. The latter is untenable to most businesses, while the former can be achieved through the use of small services, FFIs, RPC, and system modularity.

C++ does remove deprecated features, but so far there has been no reason to deprecate OOP. The fundamentalist viewpoint you're espousing is not particularly convincing; in particular, the amount of labor required to kill and replace working systems for aesthetic reasons (as opposed to evolving them) would remove much of the free time required to create new and innovative technology.

Which is why I also said that we need to evolve our thinking towards replaceable systems. As systems grow, the pace of innovation slows as it becomes cumbersome to make changes, and the barrier to entry into the code base rises. With that in mind, we are well served to replace programming languages as a means of maintaining constant velocity in our development.

I don't claim that existing systems are easy to replace, but I do posit that they can be made replaceable given that certain practices are adopted and the mindset of the developers is that the system should be easy to remove and replace.

What languages in particular are you suggesting we move towards? Rust is getting there, but it's still evolving rapidly with features. That's about it. Any HN-favorite, such as Go, is not a valid replacement.

I'm personally behind Rust and Go, but I don't want that conflated with the point I'm trying to make about facilitating code replaceability. We shouldn't move towards any language in particular, but instead facilitate interoperability of languages, focus on building services instead of libraries, and size services such that they can be easily replaced. A business doesn't need to use a different language per developer, but they should expect to phase out languages periodically, accept more rewrites, and give more consideration to the right tool for the job when greenfielding, rather than jumping to some mandated language like Java.

I also wouldn't say that Go is invalid, Go is more than production ready; it's actually in production in critical infrastructure today at scale.

Go is not valid because of performance reasons, mostly. I work in embedded, and besides the client apps, c/c++ dominate in the performance space. I think it can, and has replaced c++ in other areas, but it still has a long way to go. The C interoperability was extremely limited last I used it, and that's a big way to get people to move.

Even if every single C++ programmer stopped using virtual functions today (which is never going to happen), the OOP subset of C++ is not going to be removed.

My stance is that "the majority of C++'s semantics" can also evolve with time, and one of C++'s core design philosophies is to keep the core as minimal as possible so that the language can be evolved by how people use it rather than by changes to the core language specification. Examples include robust implementations of Entity Component Systems, user-defined function classes, and expression templates developed prior to C++11.

Moreover, the COBOL story just reminds me that there are still many cases where code must be maintained forever, and there are no unicorns that magically solve every problem. RPC introduces performance and mental overhead. FFI is limited by performance and vague boundaries. Even in backend development, there are lots of companies switching back to monolithic applications after trying microservices.

I don't think that we should force C++ to be something it's not (which has been a monumental task), and C++ will never remove its shackle of C compatibility. Moreover, languages are also defined by what they do not have, and this is very different from the current mindset of fixing C++ by adding "missing" features. Languages like Rust or Go that are designed from the ground up for a focused purpose look and feel starkly different from a garden like C++.

It's also unfair to dismiss RPC or FFI, which have come a considerably long distance, and in many cases add no noticeable overhead. Today's networks are now approaching 400G speeds, and Linux has added considerably more interfaces, such as eBPF bytecode, that reward a cross-language mindset with better performance.

I totally agree on your points in the first paragraph, but sometimes the boundary between suitable and unsuitable tasks is quite vague, and cross-language development introduces new complexity. I mean if something is 2x in other languages, I will simply use it, but if it's only 1.2x then I will stick to current solution.

The problem with RPC is latency and increased complexity, not throughput. It's still hard to achieve native performance when passing objects across the FFI boundary without a copy, especially when using stub generators.

Choosing a language is a difficult task, I agree. It is often qualitative improvements that are used to argue for new languages, which most engineers are skeptical of, as they tend to favor quantitative needle moving. So it's difficult to determine if something is 2x.

I choose languages through a series of criteria. First, I filter based on my principles, which limits me to languages where safety, efficiency, and expressiveness are highly valued. I find that most languages have some core principles listed on their website. Then I take my software's requirements and find a language in my subset of choices that is a best fit. If I am on a team, then we have to come to a consensus on the team's shared principles, which I find is very useful outside of choosing a language too.

I think that services can be sized sufficiently large to be replaceable and still encapsulate most critical code paths, such that sharing memory should be a rare requirement. Facebook and Google are both examples of companies operating at hyper scale on an RPC service architecture. For those times where it is a requirement, I can't argue that RPC is a good choice. C is a great lingua franca, and most languages support its ABI (at least the ones used where performance is critical).

Then we are basically on the same side; it's just our principles that vary. Security is ensured with external tools (e.g., SQL sanitizer/checker, minimum privilege), and AWS's 4TB instances exist for a reason - graph data is hard to scale. Sometimes even passing over shared memory is too slow for us, and we rolled out copy-free allocators which allocate objects directly in that memory region.

That's what Rust does so well. Existing C++ can be wrapped or ported. New stuff should be implemented in Rust from the outset.

I have found Rust's C++ story better than most. C++ is unfortunately a hard language to interoperate with. Rust's C story is absolutely unparalleled, and an easy way to share code between the two languages.

Strange comment, since C++'s "C story" is better than Rust's C story. (ObjC too.)

As for C++ interop, ObjC++ is the best I've seen; is there something better?

I disagree. Rust has managed to have great interoperability, but hasn't made the sacrifice of backwards compatibility that has plagued C++. C++ is also ever so slightly different, and surprising things can happen trying to compile C using a C++ compiler. Rust makes the interface explicit, which is easier to reason about.

I'm not super-familiar with Rust's C interop - is it really that good? Does it support preprocessor macros? Can it inline C functions from header files? Does it know about function attributes like pure?

If you believe that C++ "sacrificed" for backwards compatibility (and I agree), well what was that sacrifice for anyways, if not better compatibility?

It is easy in the sense that Rust can match the ABI and calling convention of C with a single annotation or keyword. No, Rust can't use C macros, because it's calling through an ABI that is connected by the linker. The preprocessor has long since run. This is also why pure and inlining are separate issues.

It is a much better interop story to use this type of FFI. As I've previously said, C++ compilers are usually not 100% compatible with their C counterparts, and will actually change the semantics of attributes and make different decisions about inlining. It is far better to use a C compiler to get machine code that matches the intended semantics and call over a well-formed ABI than to take the subset-language approach.

Not to mention that using C libraries from C++ adds a different convention for memory management in the middle of what should be RAII code, which almost always leads to memory leaks. By using the Rust FFI, Rust will at least make sure you are treating unsafe code in a controlled manner.

So Rust's C interop is limited to implementing the platform calling conventions? It's nice that Rust can do that, but this is NOT better; it's a significant limitation. A deliberate, defensible one, but a limitation nonetheless.

Here's a real example. If you write C, you'll probably make a system call, and then check errno. You might also write to errno, e.g. save and restore.

errno is implemented differently on every platform, but a C program can include errno.h, and use it portably. C++ can do the same thing: include errno.h and you're done.

But Rust can't read the C header, so it must literally special-case every OS [1].

errno is just one thing, but there's a long tail: different platforms have different types and size and function names and signatures, especially the ancient system-y stuff.

"Rust is better than C++ at C interop" is an absurd take. The point of Rust is to displace this stuff, not be the best at integrating with it.

1: https://github.com/rust-lang/rust/blob/7cb3ee453b829a513749e...

> errno is implemented differently on every platform,

> But Rust can't read the C header, so it must literally special-case every OS [1].

That's a bit of an odd point. You're saying that the C standard library implementation of errno is different on each platform, but abstracted in the standard library so that C programs can just import it and run a portable function.

...and then you complain that Rust does the same...

While it is nice that Rust offers such an FFI, it isn't specific to Rust.

Like in any other language with a similar feature, it is still a pain to manage for large APIs.

I'd say there are roughly three classes of C compatibility:

Tier 1 is C++ and ObjC, where you can use C headers nearly directly.

Tier 2 is where you have to create (and maintain) bindings, but you can map practically everything. This is where Rust, D, Nim, Go, etc. sit. More work than tier 1 is necessary, but there is no loss of performance (except maybe inlining and macros). There are distinctions within this class depending on how well macros etc. can be mapped, but usually people map to the ABI instead of the API.

Tier 3 is where you cannot map directly and inefficient adapters must be made. This is for example Java, which does not have struct or size_t. Also nearly all scripting languages (Python, Ruby, Lua, etc).

C++ interoperating with C makes it very C-like and often removes the benefit of using C++.

In rust, you can do dynamic dispatch via trait objects over C structs. In C++, that wouldn't work, because dynamic dispatch uses inheritance, which needs to add a virtual method table pointer to the struct, meaning that you'd need to translate/copy the struct into a C++ one.

Also, rust has neat tricks. An Option type of a reference is represented as a single pointer where NULL means None. That means your plain old C struct pointer can be interpreted as an Option type in rust without copying/translation.

Rust is magic pixie dust, whereas C++ is limited to manipulating ones and zeros, which severely curtails the available techniques.

  class cpp_class {
     c_struct *pimpl;
  public:
     cpp_class(c_struct *p) : pimpl(p) { }
     virtual void whatever();
  };

  class derived_cpp_class : public cpp_class {
  public:
     derived_cpp_class(c_struct *p) : cpp_class(p) { }
     void whatever() override;
  };
Even simpler with some template boilerplate code:

    // Just copy-paste this
    template<typename T>
    class CContainer {
    protected:
            T *pimpl;
            explicit CContainer(T* raw_ptr) : pimpl(raw_ptr) {}
    public:
            CContainer(const CContainer&) = delete;
            CContainer& operator=(const CContainer&) = delete;
            virtual const char* get_my_name() = 0;
            virtual ~CContainer() {}
    };

    // User
    class MyCHandler : public CContainer<c_struct> {
    public:
            explicit MyCHandler(c_struct* raw) : CContainer(raw) {}
            explicit MyCHandler(int arg) : CContainer(my_c_new(arg)) {}
            const char* get_my_name() override {
                static const char kMyName[] = "MyC";
                return kMyName;
            }
            virtual ~MyCHandler() {}
            std::unique_ptr<MyCHandler> clone() {
                return std::make_unique<MyCHandler>(my_c_clone(pimpl));
            }
    };
Moreover, the zero-copy magic from pointer to Option in Rust is not what you think - Rust relies on compiler magic to eliminate the option type's heap allocation, and if you convert it from a pointer, it will reside on the heap.

The option type never heap allocates on its own. There’s no reason to rely on compiler magic that doesn’t exist in the first place.

That is creating an extra container object which needs to be allocated separately. Rust doesn't require that.

I'm sure that you can construct the container object at each call site so that it behaves like rust (and it would just be allocated on the stack), but it would get a little ugly. In rust, it's very seamless and "rust-like" even though you are operating on a native C struct.

Similarly, the Option type magic is pretty darn useful. I'm sure you could do that in C++ as well, but again it would end up ugly because you'd need to invent a new COption type or something.

> Rust doesn't require that.

In the general case, the hardware requires it; that's how computers work. The C library you're working with that has handed you the structure has not reserved any space for you to put anything into the C structure for it to have embellishments like type-dispatched virtual functions. The memory addresses immediately above and below the structure are off-limits (if they exist at all).

If you control the allocation and disposal of the memory, then you can easily embed the structure in a larger object. When calling the C code, it just gets a pointer to that part of the object that is the C structure it knows about. Amazingly enough, C++ can do this quite easily. It can be done not just via composition, but also by inheritance:

  class derived : public some_c_struct {
     virtual void method();
  };
Then it's just:

  extern "C" void c_function(some_c_struct *);

  derived d;       // define easily "on the stack"
  c_function(&d);  // pointer upcasts to base implicitly
All you see here is 1995 draft vintage C++.

The idea that C++ doesn't work smoothly with C structures is bizarre; C++ contains a dialect of C which is highly compatible with C90 (to the extent that large programs can be developed such that they compile as C or C++).

> In the general case, the hardware requires it; that's how computers work.

In rust, it creates the trait object right as it's calling a function that requires one. But "creates" is too strong a word -- it's just sending the virtual function table pointer along with the structure pointer and any other arguments, probably in registers. Only if you are actually storing the trait object somewhere does it matter much.

In your example, you have to construct the "derived" type each time you come from C to C++. Using an extra type is annoying. I think a better C++ example would be trying to do something closer to what rust does and use the C struct pointer most places, and construct the C++ object right before you need it. Would work, but doesn't come out quite as clean as it does in rust.

In any case, the main point is that the normal mode of C++ is to add vtable pointers to the structures to achieve dynamic dispatch, and adding stuff to the beginning of a struct does not interoperate with C code very well. The normal mode of rust is to not touch the struct layout and achieve these features through other means. I'm sure there's a way to work against the grain in C++ and make it work, but if you ask me, rust gets it a little cleaner in this particular case. Or you could argue that working against the grain is the normal mode of C++ ;-).

I think I have understood your examples:

1. In Rust, you can define a method on a trait object (a type that encloses a C struct), and then invoke it dynamically with the syntactic appearance that 'self' is the struct. C++ does not allow you to add virtual methods to C structs.

2. In Rust, given a C struct T, you can convert T* to an Option<&T> without copying.

This stuff is all gravy but I wonder about the meat. Is there a good story for consuming C headers directly? C structs are typically a hash of typedefs and preprocessor gunk; can Rust navigate that?

There’s a tool, bindgen, that can read in C headers (and limited C++ support, I believe) and produce the bindings Rust needs to understand them. It’s not perfect but IIRC it’s good enough for Firefox.

>dynamic dispatch via trait objects over C structs

What does this mean? A struct is just a block of memory with zero type information. So how does Rust do dynamic dispatch on C structs again?

I think what your parent is saying is that, since dynamic dispatch is created via a special kind of pointer in Rust (trait object), rather than embedding a vtable along with the data, Rust can dynamically dispatch to C structures where C++ can’t. Does that make any more sense? I can show some code later. I’m also not 100% sure that’s what your parent is saying.

But then it’s no longer a plain struct, rather a struct plus a pointer to the trait object.

Which is perfectly doable in C++ too.

The struct is a plain struct; just like an int is an int and an int* is a pointer, a struct is a struct and a trait object is a pointer. (It's a double pointer: a pointer to that struct and a pointer to the vtable.)

The opposite is also true. Last week I wrapped a Rust application into a C++ application.

I think it’s quite presumptive for you to tell other people what best suits their needs. I’ve done some Rust programming, but C++ far better provides what I need.

I am with you.

As a managed-languages dev, I seldom use C++, but when I do, it is the best systems language currently available regarding tooling, libraries, and hardware support (NVidia now makes GPGPUs optimized for C++ semantics).

For app development, there aren't many scenarios where C++ is really necessary, whereas for systems it is going to stay around for years to come.

So anything that helps improving the quality of GCC, LLVM, CUDA, UWP, Metal Shaders, DirectX, Unreal/Unity/CryEngine/Cocos/Godot is more than welcome.

For anything else, we can just follow Microsoft's security suggestions as presented at Blue Hat IL, namely C# (replace by favourite managed language), Rust and modern C++.


Or follow NVidia's security pick - SPARK, a subset of Ada [1] for their autonomous driving and automotive firmware software.

[1] https://blogs.nvidia.com/blog/2019/02/05/adacore-secure-auto...

That as well.

They also support initiatives to have Java, .NET, Julia target their GPGPUs.

And they have supported Fortran since early days as well.

Which is why OpenCL, by being so focused on plain old C with manual compilation, never got that much love from researchers.

For me, the key takeaway on the SPARK bit was the security of software for autonomous vehicles. Java, .NET, and Julia do not have security and verifiability features similar to SPARK's. Although, I was playing with F*/F# for this. F* compiles to OCaml, F#, or C via KreMLin.

> For app development, there aren't many scenarios where C++ is really necessary,

For cross-platform work, C++ is hard to beat, and no language comes close when it comes to tooling. For example, I can debug from ObjC into C++ using Xcode, or from Java to C++ in Android Studio.

Only when constraining ourselves to native development across mobile platforms, in which case I do agree.

How is gpgpu optimized for c++ by nvidia?

Since Volta, NVidia GPUs are designed taking into account C++'s semantics.



The surface area of C++17 is so huge that it's already largely a write-only language (impossible for a normal maintenance programmer to have learned and mastered all the syntax and idioms). I'm not looking forward to trying to maintain a C++20 codebase that a bunch of architecture-astronaut devs went wild on.

That is a conversation to have with the people you work with. You don't use all the tools in the garage, but you are really happy they are there when you need them.

Also, a large part of modern c++ is that simplicity is not only better for people but for compilers.

> That is a conversation to have with the people you work with. You don't use all the tools in the garage, but you are really happy they are there when you need them.

When a language contains lots of ways to shoot yourself in the foot, it's just not realistic to expect that those features aren't going to be misused when potentially millions of programmers might be coding in it. Even getting agreement of how to use features within a small team can be difficult, and you're always going to have to use external libraries developed by others that don't follow your coding standards.

The language design should make it easy to do the right thing and difficult to do the wrong thing in my opinion. When the language isn't designed this way, it's always an uphill struggle where the coding community has to devise and enforce their own standards.

Often, not always, the other side of that foot-shooting is a very useful tool for another purpose. And it is unattainable to code-review every external library, so put the hard stop at the API boundary. But yeah, that can force your hand in some decisions too... It's complicated.

I am of the mind that any language that is useful for general purpose or at least outside of small niche areas will provide many ways to shoot yourself in the foot. C++ does have a lot of them, but it also has a lot of tools.

Friends of mine say each new job comes with a new dialect of C++.

It is rare to find practitioners that understand modern C++. Those dialects are usually old C++98 with a hodgepodge of C++11 thrown in, mostly missing the powerful functional additions and sticking to the old OOP stuff.

You can do lots of modern c++ with C++98...mostly with a few things like move construction/assignment thrown in.

A lot of it is using the type system to help you or better describe your intent. Putting your value's invariants into its type means we can guarantee them and not forget.

Use RAII. This is the biggest and it has been here since the beginning.

Yeah, the old polymorphic class hierarchies can be tough. Who owns this pointer I have right now? One hopes the library takes ownership, but a pointer type doesn't have that invariant or ownership. This is where unique_ptr really shines. Also, one can create a type like poly_value<Base> that can be a Base or Child of Base and then treat it like a value with copies and all if the currently held type supports it.

But yeah, lots of people are working with 20 year old code and can at best do boyscout rule to clean up what they touch. There are a lot cppcon/c++now like talks about working with these.

I'm one of those people. As for why: modern, heavily-templated C++ doesn't mesh well with dynamic linking, i.e., plugin architectures. I can even send a pointer to `std::function` over an ABI boundary (COM, in-process server) as an int64, and it even works (given compatible compilation -- this is checked for), but uninstantiated templates are a no-go from the start.

So anything that's to cross the ABI boundary must be wrapped in vtables with methods taking/returning primitive types.

Of course, you have to unwrap it on "the other side", and the more such wrapping/unwrapping you have to do (casting to/from void* or intptr), the higher the chance that you'll mess it up somewhere along the line.

This naturally leads to OO design from the start, and "functional techniques", if any, are buried deep in the implementation code. Though they make for awful compile times, and the debugging experience is awful as well (ever tried to inspect a boost::fusion::map in a debugger?).

Well, you can use std::function on an ABI boundary if your platform guarantees ABI stability. If not, you might need to roll your own implementation.

I feel this way about Python. The stdlib is wildly huge. I use a certain 2% of it all the time. But having skimmed the entire thing, every so often I'll have a moment of, "oh! I know the right tool for this!"

I do this with C++'s algorithms, numerics and containers. Also, getting into the practice of naming things and letting the optimizer inline really helps. Have a loop? Put it into a small function and name it. The code starts to look like it is full of C++ algorithms and reads at the same abstraction level in each method body.

"write only language" ? If anything, C++ is becoming more const-sensitive as it ages.

> The surface area of C++ 17 is so huge it's already largely a write only language

You don’t necessarily have to use every C++17 feature…

That's only true when writing new code. When you have to maintain code, you need to use all the features used by any one of the people who've ever worked on it, which at some point becomes all of the features.

When reading/reviewing someone else's code you have to know the entire set of problems and constraints that may arise from their code, including syntax traps, semantic esoterica, and effects. You can't simply decide on a subset of the language that you like and declare that none of the rest of it affects the code, unless you want to trust that none of the excluded set has any impact.

And, a whole lot of the new features are there to replace difficult and verbose common uses of old features.

You can't rewrite all the code you come in contact with.

A number of the new features like concepts are aimed more at library designers to help them do efficient things that make their libraries easier for callers. User code may not need to use all the power.

I don't think that C++ is write-only. Now rewrite-in-Rust-only, maybe!

I guess this is my feeling. We are learning nothing from the class-based OOP of the late 90s and early 00s.

I wrote a fairly large code base that started from an empty buffer in c++17. I don't really consider C++ an OOP language, though it can be used in that way if you wish.

Uh, let me clarify. We are not learning the lessons that we should have learned after class based OOP lead to unneccessary complexity in many cases.

More info in Herb Sutter's Winter ISO C++ standards meeting (Kona) trip report:


I was happy to see coroutines make it in but some of the concerns raised about the design seemed to have valid points. I'll be interested to read the papers mentioned when they're available to see how a consensus was reached and what the plans for the future are.

There were at least 4 different competing coroutine proposals at the last mailing. I guess the committee didn't see any hope of convergence and just opted for the most mature proposal.

After working with Go for the last few months, looking at these C++20 examples just baffles me. So many symbols, keywords, and library calls. It really makes me miss C.

I wonder how long it would take to learn C++ now.

Give Rust a try. People like to complain about the Rust learning curve, but comparative to the language it's trying to replace (C++) it's actually quite simple and the compiler/tooling will hold your hand compared to the conventions you have to memorize to use C++17 safely.

Not OP but i keep trying and the tools still have a long way to go and compile times are abysmal (even relative to C++). For example, https://github.com/rust-lang/rls-vscode/issues/237 this issue that has persisted for over a year with no signs of being fixed.

With C++, in spite of its warts, I can still be productive just because of the ecosystem around it. Rust was something I was gung-ho about, but after repeated attempts to create a medium/large project, I am still in a wait and see mode. Moreover, it's just not powerful enough. Many features are coming in, but they break the compiler and you end up needing to work off nightly regardless. I think it's a good effort, but people just recommending the Rust train nonchalantly really need to step back and check their biases.

I don't think using a vscode plugin as an example of Rust not being ready is very fair. You could always fix the issue yourself, it is open source you know.

My original point was that we shouldn't be building monolithic projects in a single language, but if there is an absolute requirement for it, Rust compile times aren't that bad. C++ also has very long compile times; it just comes with the territory. The reason Go can compile in under a second is that they purposefully keep the language specification small.

I'm not recommending Rust nonchalantly. I write Rust every day professionally, and I base my opinions on the experience I've had with it in that environment.

It is fair because there are very few options for authoring Rust code. Regardless of who is responsible, a robust and working toolchain is sort of the minimum bar for a language, and yes it's open source, and yes I've looked at the code, but no it's not worth the time right now because better options exist. I'm super familiar with C++ compile times, but again, there exists a lot of infrastructure to at least help with that or eliminate it completely (with some tricks). I'm not saying Rust will never get there, but for me, it's too soon, too early.

I would say Rust is simple... compared to C++. On an absolute scale it really isn't that simple at all. Not like Go or Python.

For instance, take a look at the things you have to do to get perfectly reasonable-looking code to compile if Non-Lexical Lifetimes are not allowed:


Rust seems to be the biggest joke of all time. Calling itself a systems language when the average hello world program is 4 MB (!) in size. Also, when I tried to compile a rather low-complexity example with 80 lines of code, the compiler ran out of RAM. Seriously?

The reason Rust was invented was to make the monolithic firefox executable to have multiple tabs that don't block each other and can act concurrently. All of this has already been solved in a much cheaper and already available way: Unix processes.

The size of hello world has dropped considerably in the last release now that we no longer bundle jemalloc. The smallest Rust binary ever produced was 145 bytes; you can have small binaries if that’s what you need, hello world and default settings are not optimized for tiny ones.

How much RAM did the machine have?

(Oh, and you’re wrong about the Firefox thing as well.)

I started programming with C, and when I switched to C++, that really just meant using classes and overloading operators. I enjoyed that, especially the latter. Though I'm sure my code was needlessly awkward because of all the things I wasn't aware of and didn't use, in my mind, it was neater than what I could do with C, just as fast or fast enough, so win/win ^^

Anyone know how the modules problems were resolved? There were rumblings that the technical challenges around build systems and dependency resolution were blocking items (e.g. as explained here [1]).

[1] https://build2.org/article/cxx-modules-misconceptions.xhtml#...

As someone who doesn't use C++, can anyone explain in a straightforward way why a low level language is going through so many new versions?

And is there big difference between the latest version and C++ from 5 or 15 years ago?

How do you guys keep up when a huge language is continually changing?

>is there big difference between the latest version and C++ from 5 or 15 years ago?

As compared to 15 years ago, the language is almost unrecognizable. As compared to 5 years ago, it's a bit more convenient to use, but not wildly different. C++ is a relatively old language, but it didn't have a real standard until 1998, and even that version was more about documenting what was out there rather than prescriptively designing anything. Before that (and to a lesser extent since, cough-cough-Microsoft-cough), it was a mess of incompatible OS features, compiler-specific semantics, and inconsistent tooling. In the right environment, it could be the most amazing and productive language, but it was a very short trip from "this is great" to "here be dragons and assorted hellbeasts".

C++11 has been by far the biggest change. Beyond the huge impacts on the language itself, it was the beginning of the current standards cadence of new standards every 3 years. To point to a single feature, C++11 introduced smart pointers and move semantics, which are the only "modern C++" features that I use in almost every single source file.

>How do you guys keep up when a huge language is continually changing?

A new version every 3 years is not that fast, really. And the new features are totally opt-in and backwards compatible; code written decades ago will, for the most part, still compile today. That means that you can slowly adopt new features one at a time as they make sense.

>why a low level language is going through so many new versions?

First, C++ isn't really a low-level language - or at least, it doesn't have to be. It hits a really interesting sweet spot where you can express powerful things concisely, and yet predict more or less exactly how things will execute if you think it through. It doesn't run on a VM like Javascript or Python, and it doesn't have the esoteric abstractness of Haskell, but it's certainly much higher-level than C, and there are a number of layers farther down than that.

Second, the reason for the iteration mostly has to do with filling in features that everyone had to build for themselves, and which were often platform-specific. Each version has progressively expanded the standard library with things like cross-platform concurrency primitives (std::mutex, std::thread), file I/O (<filesystem>), efficient string manipulation (std::string_view), and improved templated containers (std::optional, std::variant). Adding smart pointers in C++11 was about moving best practices about memory management into the language. Adding file I/O in C++17 was about standardizing cross-OS compatibility layers. Adding modules in C++20 is about improving standard buildchains and package management. Prior to C++11, every organization had to have their own implementations of all of those things, which required a massive investment into internal "standard" libraries, styleguides, and tooling, which were totally incompatible with each other. In the modern C++ era, a lot of those things can be the same for everybody, which makes everything more efficient - easier to transfer to a new codebase, easier to start from scratch, easier to share tools and libraries.

> 1998, and even that version was more about documenting what was out there rather than prescriptively designing anything.

That was the original intent, I think, but decision to incorporate the STL caused the standard library to change massively late in the standardization process.

P.J. Plauger wrote a book on the 1995 version of the C++ library, that ended up being massively different than what the standard adopted: https://www.amazon.com/Draft-Standard-Library-P-Plauger/dp/0...

I think that teaching myself C++ in the late 90’s was a mistake. I remember going through an entire textbook, not being able to do anything with it and stopped programming for years.

I learned C++ in the late 90s and also struggled. I now barely use it at all, mostly relying on garbage collecting languages. But knowing how memory is managed, that it's a resource that can easily be abused, even in a garbage collecting environment has been a huge boon to how I architect projects.

Depends pretty much on each one.

Coming from Turbo Pascal 6.0, it provided me a world with access to C based tools, without having to endure typical C unsafe code.

Absolutely. For me it was something I was doing for fun when I was about 13 and was my second intro to programming after qbasic.

C++ is not low level in the sense that C is low level, one of its primary goals is to support "zero overhead abstractions" meaning offering higher level language features without compromising on performance as most high level languages do.

And yes, there is a big difference between C++03 and C++17, and an even bigger one with C++20, but C++ has done a good job of maintaining backwards compatibility, so almost all C++98 code will still compile with a C++20 compiler.

How does anyone keep up with the changes in technology over the last 20 years? At least with C++ much of your old knowledge remains relevant, more so than a Visual Basic programmer who is now using JavaScript with the latest UI framework flavor of the month.

I guess that's a good point. The js bazaar is far worse than this. It just feels worse because the C++ revisions change the standard library and the actual language, which feels like there's more to track while the js crap are just frameworks, but at the end of the day, it's still new stuff to keep abreast of.

I'm just happy I actually make/simulate things for a living than worrying about learning new tools I didn't ask anyone for.

I’m far from the top expert on the history of C++, but here’s my understanding:

C++ aims to be the high level language which leaves no room for another high level language to be even more optimized below it - “pay for what you use”.

Having its roots in C, while having high level abstractions, C++ also has a lot of low level functionality which is hard to use correctly and safely unless you have a lot of experience (e.g. raw pointers). This low level functionality is sometimes needed, but definitely not always, and the recent iterations of the language over the last decade try to both add new abstractions which make the language easier to use, and deprecate the most tricky parts which are not really needed anymore and which have better, modern alternatives (this being limited by the need to maintain sane backwards compatibility) - all the while maintaining C++’s design goal described above.

An example of an addition is unique_ptr and shared_ptr added in C++11 and which make managing ownership and correct lifetime of allocated objects much easier.

An example of a deprecation is eliminating gotcha uses of the ‘volatile’ keyword, and thus simplify the language. (I believe this one is still undergoing approval.)

There is a big difference between modern and "classic" C++, but it's mostly a good difference - C++ is much easier to start using and to teach than it was 15 years ago. It still has a way to go, and the standardization committee makes a lot of effort in that direction.

As for keeping up - it’s definitely a lot of work. Learning C++ isn’t something you start and finish - it’s more like culture. You spend some part of your life studying and enjoying it, and there’s always something new.

We can hope for a future in which learning C++ is something you just start and finish, but we’re not there yet (if ever). I’m not sure if that’s a good or a bad thing :)

Applications programmer: In a low-level language like C++...

Chip designer: In a high-level language like C++...

It's called High Level Synthesis for a reason :)

from my perspective, the biggest changes are:

- auto var = eval()

- add_observer([this](int value) { this->log(value); }); // lambda expression

- vec2 = std::move(vec1); // everything in C++ by default is value-type, so this saves a copy

- for (auto&& [k, v] : treemap)

- auto ref_ptr = std::make_shared<SomeValue>(42); // ref counting

- auto unique_ptr = std::make_unique<SomeValue>(42); // non-sharing, uniquely owning ptr. can only be moved

- module foo; import "some-header.h"; export something_from_header; export void your_own_func();

In reality these are 90% of the features you need to migrate from C++98 to the latest version. Don't be terrified by thousands of pages of paper, just use them as a reference when you need to.

C++ isn't really low level. The changes are usually in libraries, not in the language itself. C++20 is by far the largest change language-wise since C++11.

To answer your question, one of the motivations behind the (ongoing) design of C++ is to factor out and codify common C/C++ coding idioms. Almost every feature, library or language, can be traced back to this motive. So most "new" features are less so "new" and more so "that thing you're doing informally? It's formalized now." E.g. concepts codify how template parameter requirements have traditionally been stated.

The why is simple: it's in pursuit of improving the language, just like anything else. And while the bloat and the increasing effort to keep up with new developments are obvious downsides, C++ has definitely benefited overall from the fact that it is a moving target. C++11 move semantics, for example, can help eliminate unnecessary copies and some classes of memory errors (when dealing with non-copyable things).

C++ and C have both been around for ages. When they first arrived, most (all?) computers were single threaded and therefore neither of them put threading utilities in the standard library and in updates to both languages, now C and C++ both have libraries for dealing with multiple threads (and I believe an improved memory model that takes threading into account.)

Almost all programming languages evolve over time. Programming changes, computers change, and people gain experience and figure out what the pain points are in a given language. Naturally, we take what we learn and apply it to future language versions.

> As someone who doesn't use C++, can anyone explain in a straightforward way why a low level language is going through so many new versions?

Every major language evolves over time. Java 8, ES5 Javascript, Python 3 were all pretty major iterations of each language. Similarly, C++11 went as far as Java 8 in the number of things that it got up to "modern" standards of convenience and power. The goal seems to be towards offering some of the conveniences of heavier or more high-level languages, but with the power of a low-level language.

> And is there big difference between the latest version and C++ from 5 or 15 years ago?

From 5 years ago, probably not. 15 years ago, C++98 was likely to be the flavor of C++ being used. C++11 was still in early draft mode, and was called C++0x. Today, C++ can look like a dynamically typed language, e.g. using "auto" type inference for variables, lambdas, and range based loops. If you follow something like the Google C++ Style Guide, you'll never see a raw pointer (C++11 introduces reference counted smart-pointers).

> How do you guys keep up when a huge language is continually changing?

Major language iterations are generally pretty uncommon. Beyond that, following a good style guide that is maintained by other people.

It's not low-level, it's a perfectly modern high-level language.

But as with anything C++, you and your team need to decide on a per-project basis on how far down the rabbit hole you are willing to go. There is a world of difference between C with classes and metaprogramming parser generators. And to the standards credit, a lot of the new stuff can be used individually.

> perfectly modern

It's getting modern, but it's nowhere near perfect. There are three different ways to allocate heap data, and at least one dates back to the 70s. There are at least five ways to initialize a variable, last I checked. The language is ludicrously complex to parse, for both a human and a computer, because it's been iterated on for so long.

Because the design space C++ occupies is so complex, and lacking almost any of its features kills a would-be competitor, Rust is about the only real contender in the space of "modern language that does absolutely everything", besides peers like OCaml that often have too arcane a syntax for C-family linguists to get into. But compared to C++, Rust is far more cohesive, just because it started one decade ago rather than four and has much less baggage so far. And with its editions system it will likely never accumulate permanent, unmitigated baggage, to its benefit.

You cannot count malloc as a downside just for being a way to allocate heap memory. It's there for legacy reasons and can't change due to C interoperability. In the recent Go best-practices story, the author said there are at least 6 ways to declare a variable in Go. That's a language with no backwards-compatibility burden from any other language.

Yes you absolutely can, because the second you are writing C++ code with anyone but yourself you will eventually find, and have to rewrite, something someone wrote using malloc in C++.

I've found code where developers cast perfectly useful data structures to void* for no reason, then indexed into them by memory offset to access their fields. I've seen well-architected concurrent classes destroyed by consumers "friending" them and then directly addressing internal private data meant to be guarded via accessor functions, and more.

C++ gives you infinite ways to shoot yourself in the foot, and even if you are the savant genius god programmer who manages to navigate its neck deep swamp of complexity your coworkers or even worse random strangers you collaborate with on the Internet absolutely won't.

What’s the best source for getting into C++ these days? I ask this as someone who already knows a good bit of C. I’m finding it difficult to know where to start, especially after reading about the proposed changes in C++20.

Start with C++11. Learn how the STL works, and how to replace common C idioms with it. Learn how to write code that never directly calls `new` or `delete`. Learn move semantics, RAII, and the rule of 3/5 well. Dip your toes into templates.

C++14 and 17 mostly just add some library features and clean up rough edges. You'll know you know C++11 well when you start wanting those features.

Only then look at C++20. Bonus: there might actually be compiler support for it by then. (Caveat: You might want to learn modules sooner; they will probably be the most visible day-to-day change.)

I think that's the first edition, FYI second edition exists (dunno if it is in Safari or not).

Personally, I love C++ Templates: The Complete Guide (2nd Edition), since it's such a thorough book on templates (which are pretty fundamental) and also goes into depth on other general C++ topics. It also discusses Concepts, which are to be included in C++20.

Why did people downvote this?

Thanks to all for the sources.

If you absolutely must use C++, I like to read Bjarne's A Tour of C++, which gives you a very high level overview of what has changed, and then deep dive with Google if there's something worth digging into. The reality of a language as large as C++ is that each project will only use a small subset of the language and libraries, and the nature of object oriented programming leads to vastly different architectures for similar problems.

If you can, I would learn Rust instead. Rust is a much smaller language that purposely limits the way you use the language to encourage particularly safe practices, and tends to favor designs that are hierarchical instead of allowing hairball object graphs. Rust also guides you at the compiler level in ways that C++ programmers must learn to do by heart, or simply leak memory and create nasty patterns in their code.

C++ includes low-level and high level features.

New language features. Lambdas, std::move, stuff like that. It's notable that new languages like Go and Rust measure their release cycles in months or weeks.

It's a new version every 3 years, much better than the glacial pace between '98 and '11 but still doesn't seem that frequent.

I consider myself an expert on C and I always used to put C/C++ on my resume. But it occurs to me that C and C++ aren't really similar anymore and that I don't really know C++ at this point.

I’ve always despised C++ but that’s when it was pretty similar to C. It seems to have gotten exponentially more complicated but also it seems like it’s pretty expressive at this point: potentially supporting Rust-like safety and FP-like design.

Maybe someone who’s an expert can tell me how different C++20 is from C?

Very very different. Don't put C/C++ on your CV if you don't know C++. You'll just annoy employers who are looking for C++ developers.

The fact that C++ versioning uses two digits for the year is the most 1980s thing about it.

If C++ continues to release a new version every 3 years, 2098 will be C++98 again. So either the language designers don't think C++ will survive to 2098 or they will need to change their versioning.

We can then skip that 98 and go straight to C++101.

I think you meant C++XP.

So most notable new language features are modules, coroutines, concepts and contracts. That's huge!

I remember vaguely that there were some issues with the initial module proposal and multiple (concurrent :) coroutine designs.

Does anyone know what the problems were and how they were resolved?

"TL;DR: C++20 may well be as big a release as C++11."

I feel like with the addition of modules this may be a release of the language spec that is going to need at least a decade to settle in. I hope the next few changes are more incremental.

I am very excited for modules. Hell, maybe I'll even see them used some day in the chromium code base I work in.

This is super exciting! Big congratulations! I only wish that C++20 gets a wide adoption as quickly as possible once its released.

Very disappointed that coroutines will be included in their present form. Did the committee just want to push something out?

"Coroutines (Gor Nishanov) was adopted for C++20. A number of other authors that have their own coroutine proposals have also done great work to give feedback to this design and inform the committee in the process of showing their own designs, and the adopted coroutines design is expected to adopt some of their additional programming capabilities in the future as well as we continue to build on it after C++20. At this meeting, we had many detailed design discussions, including notably these papers that will be available in a few weeks when the post-meeting mailing is posted: a unified consensus report from all coroutines proposers describing the strengths and weaknesses of all their proposals, and a consensus report from implementers from all major C++ compilers about the tradeoffs and feasibility of the basic facilities required by the various coroutines proposals. Both of these papers are very educational and highly recommended."


The papers aren't available yet but presumably will explain the reasoning behind accepting them in their current form.

If you want your opinion heard, you have to participate. The problem though, is that certain people have pretty much a full-time job to push stuff through the committee. So they invest way more time than you can ever afford. And in the end they get their shit into the Standard.

A good solution today is better than a potentially better solution tomorrow.

It really depends. If the accepted coroutines design hides memory allocations willy-nilly, there is a whole body of code that will never, ever use coroutines, while waiting for non-magic-allocating coroutines might have made those codebases better off in five years.

C++ compiler authors must practically be gods to translate this stuff. There are so many parsing problems — quantum mechanics is child's play.

Query: Would the compiler writers have to write compilers from scratch for C++20, or can the features be added without rewriting the entire system?

I am aware that new features mean a few parts need to be revised. But does the whole need to change?

And I agree for C++ compiler writers to be gods. Looking at the source code for GCC/clang makes me doubt my programming skills. Same goes for the linux kernel source code.

It's the parser (context dependent) that's hard. The rest is a regular compiler that you can learn to write out of a textbook.

And the parser is hard in pain-in-the-ass terms, not CS theory terms.


If the parser is hard to write, that means the syntax is conditional on previous statements, which I've not seen in the spec. As long as the syntax is lexical, there are no real issues parsing. How does the context throw curves?

The example I know is the following line:

    foo * bar;
This is either an expression statement computing the product of `foo` and `bar` or the declaration of a variable `bar` as a `foo` pointer. To solve this ambiguity, you need context: if `foo` is a type available in the current scope then it's a variable declaration, otherwise an expression.

I did say context dependent, which is the opposite of context free.

Yes. May need to back track due to context sensitivity. A quote from the source comments:

    Some C++ constructs require arbitrary look ahead to disambiguate.  
    For example, it is impossible, in the general case, to tell whether 
    a statement is an expression or declaration without scanning the 
    entire statement.
I am sure there must be some devilish source code examples that require an exponential search for the correct parse. Also, has anyone constructed a modern C++ parser in bison which can backtrack?

Gcc 3.3 and before had one, but they decided to write one by hand going forward. I don't think it's reasonable to do so without keeping context.

Making sure that features are implementable on existing compilers (without major changes to the AST or breaking the ABI for example), is a significant concern for the committee, and often features get dropped for this reason.

I wonder if this really makes sense. In the long run, C++ will be even more massive than it is now. A hundred ways to do anything. Meanwhile, new, lean languages jump up all the time: Crystal, Nim, Swift, Rust. I wonder how long the momentum can carry C++.

C++ will never die, because no other language owns the "zero-cost abstraction" business the way it does.

Modules are in! Great news.

Yay, even more features and complexity. Will they ever stop?

It already takes years to really master C++.

Soon you will be unable to read and understand C++ code without googling every second line.

Since the language is shifting and changing so fast, how do teams pick style guides and coding standards for multi-year projects?

In my experience you stick with a certain version, even as new stuff comes up. My current work is in C++14 and will not move to newer versions.

What are the reasons for not moving? We are currently on C++17 and we need to support Windows, macOS, iOS and Android.

One reason: consistency. When you introduce new features, idioms shift and maintaining code that uses multiple idioms for similar purposes is simply less fun. In the worst case you make changes harder because you now have to make your changes compatible with more modes of usage.

Dependency (sometimes even binary dependency) with a library that doesn't support newer standard can be an issue. Not all compilers guarantee interoperability, and even with those that do there can be bugs.

An IT department that controls compiler upgrades, in my case :/. We're still stuck on not-quite-C++11 because we're still stuck on g++-4.3

No plans to move ever, or just sticking to it for a while? The latter is perfectly reasonable; the former seems a great way to end up with technical debt: at some point compilers might drop support for older standards (although it is admittedly very unlikely, at least for gcc), and 3rd party libraries as well (this is much more likely).

Opened the coroutines draft just now. I like the humour in the note at the bottom of Page 1!

C++ is no longer C++. It might have been that at some point, but nowadays, it's more like C plus e'th root of pi.

Even the name "C++" is inane.

I have a love / hate relationship with C++ — without the love.


I've been fighting C++ recently with Qt. Something as simple as iterating over widgets on a page isn't possible without casting to every kind of child widget and checking it. I get why, it's because of static typing, but I wonder if it'd ever be possible to 'type hint' and make it possible to implicitly downcast. For example Qt will let you iterate over the QWidget * type, but you gotta cast it to the real type, QLineEdit and such, to get the widgets' values.

Of course the 'C++' way is doing event listeners and recording data when it's changed.

A tangent on ways to do it with Qt, because I don't like seeing Qt presented in a bad light due to unfamiliarity.

With Qt you can chain QObject::metaObject(), QMetaObject::className(), and int QMetaType::type(const char *typeName) to get an integer you can use to switch () on type. It is, I believe, possible but obscure on purpose because it is not meant to be used willy-nilly. You can also access Q_PROPERTYs by name using QObject::property(), and Qt's own classes have the same names for the same properties where it makes sense - such as text() in QLineEdit and QAbstractSpinBox etc.

It depends on your requirements whether using that approach actually makes sense. I haven't seen it used yet... If you want to serialize GUI state, a more intrusive design where, say, you keep some helper data on the side to allow "dumber" code in the actual de/serialization might be better. Or just wire up the widgets to a model (QAbstractItemModel and / or QObjects exposing Q_PROPERTYs - that's how you do it with QML if you do it right) and use the model as the main source of truth and, of course, for serialization.

The "high level debugger for Qt" GammaRay is a nice showcase of what you can do with Qt's introspection capabilities. Qt had ~3 extension points added for GammaRay to observe object lifecycles, most everything else is based on pre-existing functionality.

How do you use QMetaType::type without having to do the same as I said, which is explicit casting? afaik there's no way to get the 'real' type that you can shove into qobject_cast or dynamic_cast to turn QWidget into a QLineEdit. The only way is to write in the type and try each cast, you can't store types in some map and go 'get' the type at runtime to have casted.

Yes you will have to cast eventually, and no you won't have to try casts.

You'd have to point out specific code for that. Afaik that's not possible in c++ at runtime. Compiler has to know the type even in dynamic casting.

I actually think I've given you all the info you need. You switch on type ID and cast to the then known type.

That is the point of virtual base classes. Not using them in this situation is simply fetishism on the part of your library.

Curious: I've written hundreds of thousands of LOC of Qt code and never have I encountered a case where I would need to iterate over all child widgets; it's generally only a well-known subset. What do you need to do?

It's a config menu that's in part generated from a config file, and in part pre-made in a .ui file. I don't control the config, and it can add or remove options. I pre-make widgets for known options, but still generate widgets for any options that aren't covered by pre-made ones.

To keep things far simpler, the .ui file is meant to be the one that defines the pre-made widgets and has properties on them. The way UI is usually done in Qt, uic generates a header and the object names become members that can be used directly. Then all those members are used directly, and become hard-coded.

In this circumstance though, I don't want to hard code in every widget, that way if new options are added, they can be updated in the .ui file and not require recompilation and more hard coding of widgets.

Problem is, you can't do qwidget->find, get the head widget, then get a list of the widgets' actual child types. From what I've read and from asking others, you gotta qobject_cast until it sticks. C++ can't get types at runtime and then cast them; it has to know what to do at compile time. Hence having to write out every type and cast it.

I have mixed feelings.

On the one hand, C++ has created generations of Visual Studio GUI addicts. Imagine how much further software would have advanced by now if the average person (a Windows user on a crappy laptop) had good, free tools to make web and mobile applications. Two things VS and C++ are just really bad for, if you inhabit reality.

And all those stubborn people who are going to reply insisting otherwise, how much better would the world be without them? Just kidding guys, hang in there.

On the other hand, less competition! Long live C++!

> C++ has created generations of Visual Studio GUI addicts

C++ has been used outside of Windows since basically the inception of C++.

C++ predates Windows.
