Why the C++ standard ships every three years (herbsutter.com)
229 points by frostmatthew 3 months ago | 220 comments



The 1998-2011 gap did a lot of damage to C++ in terms of "market share". I think it's necessary to avoid letting something similar happen again.


I kind of see it differently: the 98 and 03 standards had a lot of mileage, and as necessary as the more recent changes may have been, people found the old standard pretty usable for a long time.

C++11 was a really big shakeup. In contrast, C11 isn't a major difference over C99. Sometimes I read about the rapidly evolving modern C++ and I wonder if they are moving too fast, as large chunks of the community have not even caught up with what is already there.


I think it's worth noting that up until 2010, C++11 was known as C++0x. Everyone knew it was coming. You could get a good idea of what it was going to contain by looking at things like Boost. C++03 wasn't some abnormally stable version; it just took longer than expected to write the standard for C++11.


Yes, I remember. Also, while c++0x seemed "right around the corner" for several years, a lot of the library features like smart pointers were already common practice; the standard just, well, standardized them.

It was possible and common to have pretty "modern" styles in c++03, you'd just have to do without lambdas etc. and be using less of the 'std' namespace.


Strong disagree.

The nonsense needed for "variadic" templates in C++03 isn't "just do without lambdas". You basically have to walk on eggshells to get the equivalent of unique_ptr. Not having to type std::vector<SomeTypeName, WhateverBuffer>::const_iterator changes the way you write code, too.

The stronger guarantees on copy elision lets you skip return by reference nonsense more aggressively.
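
To make the iterator point concrete, here is a minimal sketch (the function name and element type are just for illustration):

  #include <string>
  #include <vector>

  void print_all(const std::vector<std::string>& v)
  {
      // C++03: the iterator type has to be spelled out in full
      for (std::vector<std::string>::const_iterator it = v.begin();
           it != v.end(); ++it) { /* use *it */ }

      // C++11: auto (or a range-for) removes the ceremony
      for (const auto& s : v) { /* use s */ }
  }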


> Sometimes I read about the rapidly evolving modern C++ and I wonder if they are moving too fast

Sure there are a lot of small accumulating changes, but they can easily be caught up on by reading the documentation when you actually need those features.

On the other hand, I've been waiting for modules, concepts and networking literally since 2012 - and I only started using C++ in 2011, with around 2 years of experience in C. I don't think anyone is moving too fast in that direction.

Edit: I still remember the graphic from the committee/Herb, which showed 2014 for the networking TS, and 2017 for concepts/modules. The "networking TS" then ended up being like one .pdf file with functions for LE/BE conversion, and well obviously concepts and modules aren't in C++17. (Neither is any actual networking.) And while concepts look to be on a fairly good path for C++20, I'm going to be very surprised (and happy) if they'd manage to get modules in, too.


This was also the timeframe in which boost::shared_ptr, boost::filesystem, boost::variant, and boost::thread had matured to the point where they could be imported into the standard, and when people discovered all these dark corners like SFINAE. The C++ language was frozen, but the C++ ecosystem never really stopped moving.

I actually think that the lack of a popular cross-platform package manager is much more harmful. It's amazing what GitHub and CocoaPods did to the niche Objective-C community in such a short time, while C++ still isn't a language where people celebrate cool open source libraries.


Maybe this is my old skool C++ bias, but every time I’ve inherited a project using CocoaPods I’ve found the code to be of poor quality and the Pods to be either corporate libraries (third-party analytics, crash logs, etc.) or wrapper code of questionable quality that hasn’t really provided any benefits over a couple of hours of writing some simpler code yourself.

And you yourself mentioned Boost, which is kind of a code ecosystem of its own, all open source. But I think C++ has problems with libraries because it’s such a pain to add a library to C++. If the library is C++, you can’t ship a binary and header file like with a C library because the ABI needs to match, so you have to compile the thing. And then people like Boost do template wizardry that consumes compile time to produce magic, but some developers value the compilation speed over magic, so they don’t use anything heavily templated. But if it isn’t templated, it likely could be written in C if it isn’t a big UI framework, and be available for everyone. So, no C++ library ecosystem.


Boost is awful even if you don't instantiate any templates. Just adding #include <boost/filesystem.hpp> will add 1 second to compile that compilation unit and we haven't even done anything with it.


Swift had the same ABI issues as C++, but when most packages ship as source code, it's not that big of a problem. This is precisely the thing that a standardized package manager can establish.

I think it's a bit of a catch-22. C++ projects are so rare and monolithic that reinventing the wheel every now and then. But maybe more (smaller) projects would happen in C++ if there were polished libraries for reasonably common requirements like sending a SOAP request.


> Swift had the same ABI issues as C++, but when most packages ship as source code, it's not that big of an problem

... and when there's essentially a single build system for Swift code. While for C++ there are several.


Having a single master, instead of having to please every compiler vendor, helps.


Over-edited my comment, should have been:

> C++ projects are so rare and monolithic that reinventing the wheel every now and then...

...does not hurt much.


I would say that the primary driver for growing the niche Objective-C community was iOS, not CocoaPods. And Objective-C is dying again with the introduction of Swift.

Also, I see cool C++ open-source projects being released fairly frequently.


My graduation project was to port the particle visualisation framework of my supervisor from Objective-C/Renderman to Visual C++/Windows/OpenGL, because he saw no future in it. Little did he know what would happen 10 years later.


Alternatively, releasing new features every 3 years to an already-bloated language could cause even worse damage.


This suggests you aren't aware of the many huge benefits that have come from introducing modern features that other languages have. C++ since 2011 is quite a different language to what it was before, and this is hardly a bad thing. So many of the challenges of writing C++ were significantly simplified with the introduction of lambdas, smart pointers, and threading primitives.


The issue isn't that it gets new features; it's that they're half-assed, because you can't impose new restrictions on old code. The power of these features in modern languages comes from their ability to protect you from yourself. That has more to do with what you can't do than what you can. Bolting more legs onto C++ doesn't protect you from anything, it just increases the surface area of the language.

They haven't introduced the features other languages have, they've introduced poor rip-offs that fail to work as one might expect having worked in any of the other languages they're derived from.

The problem is it's not different than it used to be: the cracked foundations are exactly the same as they've always been, but there's a bigger and more complicated house on them. C++ is starting to feel like the Winchester Mystery House.


The C++ philosophy is not about protecting you from yourself. It's about allowing you to express your idea in as low- or high-level terms as you require. This lets you write a highly optimized tight loop and then abstractly describe less performance-sensitive parts.

In the past it was thought that people would use safer high level languages and then drop down to C for performance. That vision just doesn’t seem to work out in practice - except at the cross process level.

The trade off to this is language complexity. If you want a simple language C++ isn’t for you - and that’s OK.


C++ for me was, and with every newer standard increasingly is, really a meta-language. It is used by many big projects to build their own "language". Such a language is almost completely ad hoc and does not enjoy compiler checks, because it lives in project guidelines. It may enjoy some checks thanks to the whole template machinery, but error messages barf about templates, not the real intention behind them.

Every big project is almost entirely different. They use different sets of features - some overlap more, some less. Many developers like to use C libraries, because they are easy to wrap in their version of C++. When you shop for libraries you often have to think if their version of C++ will work with yours. There is some consensus around STL and Boost, so at least that is relatively straightforward.


There seems to be a more modern trend of wanting all code to look the same. I've worked in large projects that have existed over many eras (including as far back as K&R C). The "refactor when it becomes a problem" approach seems to work amazingly well. Global refactors and project-wide styles seem to always fail miserably: inevitably a new era comes before the previous standardization effort completes.

E.g. in the C->C++ transition most mallocs were left alone. If you wanted to add a ctor/dtor then you would go refactor them as necessary. It also encouraged you to encapsulate your work more so than you would have otherwise.


"Global refactors" work well with type system support, and not at all otherwise - and more modern languages do tend to come with better such support. Even for C++ itself, LLVM has a few bespoke tools to help you refactor old code as suggested by the semi-official C++ Core Guidelines - of course not all possible "refactors" are equally supported!


A project where different modules are written in different C++ dialects is a real pain when you have to refactor code across modules or even just write or review code in different modules. And the finer the granularity at which you allow dialects to be mixed (within a module, within a file, within a function), the more horrible it gets. Every time you make a change you need to think "what dialect am I supposed to be using here?" The mental surface area required to understand code grows and grows.

But it is also true that forcing all the code in a project to use a single dialect is expensive. Developers need to decide what that dialect is --- not just once, but constantly as C++ evolves. ("Is it time to use Ranges yet? Concepts?" etc.) You need to do expensive refactors every time you update the dialect. You need to enforce the dialect at code review time (harder as it changes). Every time you import third-party code into the project you need to update its dialect.

A constantly evolving language that regularly deprecates commonly-used features in favour of "better" alternatives, like C++ does, is problematic. The faster pace of C++ evolution is making this problem worse.


Which features are painful?

Aside from exceptions (Can't use them safely if RAII is not universal) and shared pointers, most new language features are pretty localized in effect. E.g. using a lambda in a function originally written in C++98 does not make the existing code less safe. Only your sense of aesthetics would force you to update the rest of it.


> E.g. using a lambda in a function originally written in C++98 does not make the existing code less safe.

You may not be able to if the lambda captures state and you need to convert it to a function pointer.
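
A minimal sketch of that limitation:

  #include <cstdio>

  int main()
  {
      void (*fp)() = [] { std::puts("captureless"); };  // OK: converts
      fp();

      int calls = 0;
      auto counter = [&calls] { ++calls; };  // captures local state...
      // void (*fp2)() = counter;            // ...so this would not compile
      counter();
  }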


> That vision just doesn’t seem to work out in practice

Well, about half of the Python ecosystem seems to disagree.


Python is likely the best example of this working, however even in Python the boundaries between high and low performance sections are very formal. It's a lot simpler to go in and optimize a piece of code the profiler has pointed out with C++.

I really don't understand the animosity non-C++ developers have towards the language. You can still use Python, Rust, etc if you want. Nobody wants to force you to use C++.


I am a C++ developer by career. Me and many other C++ developers I know do not really like the language. It is what it is. It has useful parts, but complexity is a weight.


How do you end up being a C++ developer that hates the language? I’m a C++ developer as well, and if I ever decided that I didn’t like what I was doing every day I’d learn a different language and try to get hired for that.


I’ve been an iOS engineer for 7 years and I’m not exactly a fan of Objective-C. I’m good at it though, I know where all the sore spots are and part of my value is I can stop people from hurting themselves with the language. One day, inshallah, our team can eventually move on to swift (thank you ABI compatibility).

Until then I work in it because my goal (and my job) is to deliver amazing products and experiences to my customers and if I had to do that in COBOL, I’d do that too. My job is not about the language for me, it’s about what I’m doing with it.


Non-C++ developers are usually awed by it. The animosity comes from people with experience in C++ that decided not to use it.

Pre-11 C++ was a bad language. The gains of leaving it for something like Java (which isn't even good by today's standards) were huge. The new standards are making it possible to create better code in the language, but it is still far from a modern language, and the added features cannot remove complexity, they can just add to it.


> the added features can not remove complexity, they can just add to it

There are countless examples of the so-called modern C++ features reducing complexity in code.

For example, it is now considered a code smell to ever write "new" or "delete" in your code. Imagine that, never using these words to manually allocate and delete memory in C++! But it's true; unique_ptr in particular makes so much code simpler and safer.

Type inference with auto doesn't just save typing, it improves performance in many cases and reduces a lot of complexity, while also avoiding uninitialized variables.

These are just a few of at least a dozen examples that come to mind about reducing code complexity with more modern C++ features.


New features can reduce the complexity of code, but they cannot reduce the complexity of the language.

C++ programmers still need to know what "new" and "delete" do, so they can work with older code that uses them. They also need to learn what "auto" does, so they can work with newer code that uses it. (The behavior of "auto" isn't trivial; how many C++ programmers understand the difference between "auto v = <expr>;" and "decltype(<expr>) v = <expr>;"?)
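
To illustrate that difference (a small sketch): auto deduces a non-reference type, while decltype preserves the reference.

  #include <vector>

  int main()
  {
      std::vector<int> v{1, 2, 3};

      auto a = v[0];            // a is int: auto drops the reference
      decltype(v[0]) d = v[0];  // d is int&: decltype keeps it

      a = 42;       // modifies only the copy
      d = 42;       // modifies v[0] itself
      return v[0];  // 42
  }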


Why would you ever use or see the latter? Fabricated examples aren't helpful.


You wouldn't...

...instead you'd use "decltype(auto) v = <expr>", which is equivalent. And people certainly use decltype(auto), or else it wouldn't have been added to the language, 3 years after regular auto.


Not sure why this was downvoted; decltype was introduced in C++11 alongside auto for declarations, so the example given isn't representative of real code. A better one might be a function returning auto vs returning decltype(some expression).


Which specific example is chosen doesn't seem all that relevant to the point of "there are these two similar but not identical things whose difference people have to (and might not) understand".


Picking an example that never occurs is not relevant, because nobody needs to learn the thing that never comes up in practice.


But as you show with your other example, the difference between auto and decltype does come up in practice, even if the example is not a typical one.


Sure: that's why I provided one that reflected the real world more closely.


re-reading: My first comment was addressing the "why might this have been downvoted part", I didn't make that clear enough.


I hate unique_ptr and shared_ptr, using them is so glaringly inconsistent. Half the time you can’t construct a unique_ptr when it’s easy, because you don’t have the information until just enough later (like after a calculation or two in the body of your constructor) that you can’t use the easy way. So how do you transfer a unique_ptr? Well, the obvious ways don’t compile, and you eventually learn you have to move it, which involves using std::move(), but it always seems to go in the place I don’t expect it. And then how do you use a unique_ptr outside of your class? You can’t pass by value, for obvious reasons. Pass by reference, maybe? I think that’s frowned upon, I think you’re supposed to pass a naked pointer, and Modern C++ code is supposed to understand that naked pointers mean I’m loaning you this pointer. But I thought the point of Modern C++ was to get rid of pointers? Anyway, shared_ptr works completely the opposite way. You are supposed to pass the pointer by value. Now you can argue that of course it’s supposed to be that way, and the arguments are all cogent and when you spend half a day figuring it all out it makes sense. Until tomorrow when you forget it all and have to actually use the things. Plus I hate underscores with a passion, it hurts to type them. Modern C++ also seems to like making a line of code longer and longer, because you can’t just make a unique_ptr or shared_ptr with a constructor, no, you need make_unique<typename>(...), and the arguments are magically the same as the type’s constructor, even though it’s obviously a different function. Yuck! At least new and delete are pretty obvious how to use, and only have one special case to worry about ([]). Granted, the *_ptr are better, but I hate using them and wish I could use something that was shorter and easier to remember.


> I hate unique_ptr and shared_ptr, using them is so glaringly inconsistent.

They're glaringly inconsistent because they behave differently with regards to ownership, and the fact that you can't copy them or pass them around in certain cases is because they're designed to stop you from doing this as it would undermine the reason you're using the class.


Yes, but that fact is I can't ever remember how to use them properly (unique_ptr in particular). It's not often that you need to pass a unique_ptr to a function outside your class so that it can do something with it (without transferring ownership), so I can never remember how I'm supposed to do it. But it seems like if I want to hand a pointer to a short-lived function, doing it ought to be pretty consistent, whether I'm passing an old-skool naked pointer, a unique_ptr, or a shared_ptr, and it's not.

Like I said, if you look at all the logic, there's a good reason why everything is the way it is. The problem is I can't use the things without looking them up. Usability of my language is a big deal for me, which is why I hate unique_ptr.


FYI, you can use a constructor with a shared/unique ptr.

auto p = unique_ptr<Foo>(new Foo());

Totally works, and may be easier to remember if you don't like the "magic" parameter forwarding.


Careful with that if it's not the only thing happening in the statement. It might be a while between the 'new' and the unique_ptr ctor. If an exception happens in between the object will leak.

e.g. foo(unique_ptr<X>(new X), unique_ptr<Y>(new Y)) is a leak waiting to happen.

https://stackoverflow.com/questions/37514509/advantages-of-u...
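
For anyone following along, the usual fix is make_unique (C++14), which leaves no window between allocation and ownership. A sketch with hypothetical types:

  #include <memory>

  struct X { };
  struct Y { };
  void foo(std::unique_ptr<X>, std::unique_ptr<Y>) { }

  int main()
  {
      // Pre-C++17, evaluation could interleave as: new X, new Y, then the
      // two unique_ptr constructors - a throwing "new Y" leaks the X.
      // With make_unique each object is owned the instant it exists:
      foo(std::make_unique<X>(), std::make_unique<Y>());
  }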


Small note that this is fixed in C++17: https://stackoverflow.com/questions/38501587/what-are-the-ev...


Very good point! I was only thinking about the specific case of a statement that just makes an object and shoves it into a unique_ptr (e.g. does what make_unique does in the linked Herb Sutter article), in response to the comment about not being able to use a normal constructor. You are totally right that this isn't exception-safe in cases like you mentioned.


The vast majority of the time you just want to pass a reference to the pointed-to object (for both unique_ptr and shared_ptr). Once you've decided what you're actually going to do with the object, that usually determines how you're going to want to pass it.
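
A minimal sketch of those two conventions (the Widget type is hypothetical):

  #include <memory>

  struct Widget { int n = 0; };

  void use(Widget& w) { ++w.n; }                     // borrow: caller keeps ownership
  void take(std::unique_ptr<Widget> w) { w->n = 0; } // sink: callee takes ownership

  int main()
  {
      auto p = std::make_unique<Widget>();
      use(*p);             // lend the pointee; p still owns it
      take(std::move(p));  // hand over ownership explicitly
  }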


> But I thought the point of Modern C++ was to get rid of pointers?

no it's not, it never has been. The point of modern C++ is to get rid of owning pointers.


How does auto improve performance? Doesn't it just expand to whatever type a human would have manually had to put there in the first place?


In some cases it lets the compiler deduce the exact type automatically, so you don't have to do template-fu, use a nonspecific type, or manually write multiple specializations.

Plus it usually makes the code cleaner by focusing on structure, not types. Like any construct, it can of course be abused.
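
One classic case (a sketch, not from the parent comment): a slightly-wrong hand-written type silently converts and copies, while auto deduces the exact type.

  #include <map>
  #include <string>

  void scan(const std::map<std::string, int>& m)
  {
      // value_type is pair<const string, int>; this near-miss type forces
      // a converting copy of every entry:
      for (const std::pair<std::string, int>& kv : m) { (void)kv; }

      // auto deduces the exact type: no conversion, no copies:
      for (const auto& kv : m) { (void)kv; }
  }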


I disagree that pre-11 C++ was bad. It would be bad now to make a new language that looked like pre-11 C++ but there is a reason it was so popular.


It's a bit like IE6. A big upgrade over C, but the next update took so long that it was hated at the end of its life. Which, coincidentally, is exactly why C++ now does timed releases.


I worked full-time in C++ from 2012 to 2017, and I don't like the language. I also did feel like I had to use it: I was a developer on an Apache/Nginx webserver plugin, and since those are written in C our choices were (1) C, (2) C++, or (3) a C shim that communicates with some other language. None of these options were ideal, but of them (2) was our best one.


> they've introduced poor rip-offs that fail to work

This sounds like hyperbole to me. I've worked in other functional languages professionally for many years, but when I write C++ with consistent use of std::function and lambdas, I get many of the same benefits and a very enjoyable workflow. Is it the same as Haskell or Clojure? No, because they are totally different languages. But within the C++ world, they offer a great productivity benefit that I don't think satisfies the definition of "rip off".


Not all of them, to be sure, lambdas and in particular their capture list syntax is solid. By no means is that universal though, for instance, move semantics.

std::move doesn't move anything; it casts an object to a movable reference (equivalent to static_cast<T&&>). The compiler doesn't preclude you from accessing the old object (or say anything at all) and there's no guarantee it actually did get moved, it's just a hint. That's not real move semantics, it's a shoddy knock-off.

You also have to manage a new constructor, a new reference type, and the opt-in, because the default remains not using move semantics. Worse yet, there's varying agreement on what you can or should do with the source of a moved value; the STL containers will all continue to let you use the old value, it's just empty now. That's not standard behavior because there is none, and it's de facto now because it's in the STL. What a nightmare. [1]

std::variant is supposed to be associated values on enums, but of course, it doesn't do that either. You don't match; you create a new struct with a bunch of operator() methods on it [or overload a lambda?!] and throw it at std::visit. You can only have one alternative of each type. Then it throws exceptions if you start mucking about in ways you shouldn't. There's no context. It's dreadful. [2]

[1] http://yacoder.guru/blog/2015/03/14/cpp-curiosities-std-move...

[2] https://www.bfilipek.com/2018/06/variant.html
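
For reference, the std::visit dance being described looks roughly like this (C++17, using the usual "overloaded" helper):

  #include <cstdio>
  #include <string>
  #include <variant>

  // The workaround the parent describes: overload a set of lambdas.
  template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
  template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

  int main()
  {
      std::variant<int, std::string> v = std::string("hi");
      std::visit(overloaded{
          [](int n)                { std::printf("int: %d\n", n); },
          [](const std::string& s) { std::printf("str: %s\n", s.c_str()); },
      }, v);
  }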


I think the way you are thinking about move is all wrong.

In any kind of correct code, the difference between a move and a copy is only performance. If a copy were to happen where a move was requested then the code is just as correct, so I find it strange to get so hung up on it not being “real”.

Also, if move is the only available option, and move can't happen, you get a compiler error. If performance is correctness for a type that is expensive to copy, make copy not an option.

Also, there is a move constructor by default so it’s not opt-in, you opt out only if you start screwing around with copy / assignment / destructors which you usually shouldn’t need in modern code anyway.

Sure, the state of the moved-from object is unspecified, but really, I can't think of a time when I've written code that would care. It's kind of a non-problem.

If you really want to reuse an object after a move I question your motives, but you should just reinitialise it by assigning a freshly constructed value, and the result of that is of course standard.


> In any kind of correct code, the difference between a move and a copy is only performance. If a copy were to happen where a move was requested then the code is just as correct, so I find it strange to get so hung up on it not being “real”.

That's just not true when you take smart pointers into account. unique_ptr is pretty obvious, since it can't be copied. But shared_ptr is more devious, as there is a clear semantic difference between giving someone a copy of your shared_ptr vs moving your copy to them. And, given that destructors are often used for more than simple resource cleanup (e.g. they are sometimes used for releasing locks), the difference between a move and a copy can have a huge impact on program behavior.
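
A small sketch of that semantic difference:

  #include <cassert>
  #include <memory>

  int main()
  {
      auto a = std::make_shared<int>(7);
      auto b = a;             // copy: both own it, use_count == 2
      assert(a.use_count() == 2);

      auto c = std::move(a);  // move: ownership transfers, a is now empty
      assert(!a && c.use_count() == 2);
  }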


Lambda captures make it pretty easy to create nasty memory safety issues via dangling references. Sure, you could create such issues with the equivalent manually written closure-structs, but lambda captures pave the dangerous path.
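
A minimal sketch of the dangerous path and the safe one:

  #include <functional>

  std::function<int()> make_counter_bad()
  {
      int count = 0;
      return [&count] { return ++count; };  // captures a local by reference:
  }                                         // dangles as soon as we return

  std::function<int()> make_counter_ok()
  {
      int count = 0;
      return [count]() mutable { return ++count; };  // copy: safe
  }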


But cppcheck can find a lot of dangling references with lambda captures.


I would consider your counter examples rather compelling, thanks for the details. I never particularly enjoyed move semantics, so you have a point there.


There is still quite a bit of friction at the seams to the "core" language though. If you write a plain templated function, you have to wrap it in a lambda (or some other functor type, or std::function I guess) before you can do anything with it except calling it. Pattern matching is also quite verbose currently.

Whether this matters really depends on the code you're trying to write of course.


The fact that you can't make old code use the new features in no way makes the new features "half-assed". It may make the old code that, but not the features.

Or did you want them to deprecate large swathes of the previous standard?

As for the rest of your rant... it's mostly just a rant, with very little substance.


Those are all C++11 features, right? So they aren't much of an argument in favor of 14, 17, 20.


We also got Class Template Argument Deduction, structured bindings, various syntactic sugar (if (auto var = init; cond) {}, if constexpr, fold expressions...), std::variant, std::optional, and a whole POSIX-style filesystem library, to name a few. And that's just C++17 alone.

It's nothing earth-shattering (that stuff is coming in C++20) but these are all features that improve the language and allow you to write better code. Consider e.g. if constexpr which can eliminate so much SFINAE cruft. Or CTAD to reduce the need for makeXYZ() functions.
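
For instance, a dispatch that used to need two enable_if-constrained overloads can collapse into one function (a sketch):

  #include <string>
  #include <type_traits>

  template<class T>
  std::string stringify(const T& t)
  {
      if constexpr (std::is_arithmetic_v<T>)
          return std::to_string(t);  // numeric branch
      else
          return t;                  // only instantiated for string-like T
  }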


14, 17 and 20 are full of small and big improvements: coroutines, contracts, modules, concepts, huge improvements for compile-time computation with constexpr, etc.


As an example, yes. Other versions offered similar benefits (C++14 in particular was a "bug fix" version for C++11, which I think most would be happy came quickly).


This. They keep adding half-finished features, apparently for the sake of releasing every 3 years. Move types that may not move (std::move is just a cast, a suggestion that if taken leaves the source object in an indeterminate, usable state), pattern matching that isn't (std::variant). Frankly, I wish they'd stop.


> The 1998-2011 gap has made a lot of damage to c++ in term of "market share".

Not really... Rust 1.0 is from mid-2015, well past 2011.


Not saying Rust is bad by any means, but it has about 20 times smaller market share than C++ according to https://www.tiobe.com/tiobe-index/ and 11 times smaller according to http://pypl.github.io/PYPL.html.


Tiobe and PYPL are not market share indices, they're googling indices. I'd be very surprised if Rust really has 1/20 of C++'s market share as in number of companies and programmers that use it.


True, they are not great indices. Indeed.com shows me around 40 times more job openings for C++ and StackOverflow shows 14 times more C++ questions this year. Github shows 12 times more commits in C++ this year. None of these are great by themselves but a guess of around 20x seems reasonable.


Rust only reached 1.0 status in 2015, keep that in mind.


This sounds like a great idea, can't believe I didn't see it earlier. Maybe I should adopt it for my personal projects (software, music, etc): pick a regular release schedule and stick to it, even if it means releasing less feature-rich stuff. Does anyone have experience with this approach?


Many of my personal projects have only existed because of deadlines.

If you go to shows or conferences, showing or talking about something is the best way to have something, and the deadline focuses you like nothing else.

(that said, sometimes I wonder if I would have something much more polished with a last-minute magical 2-week reprieve)


> If you go to shows or conferences, showing or talking about something is the best way to have something, and the deadline focuses you like nothing else.

The same applies for a personal project that's delivered as a gift to a family or friend. I guess I should write sometime about the Raspberry Pi-based Christmas present I gave my father in 2014, even though he stopped using it scarcely a year later. For this thread, the point is that the deadline made this the only personal coding side project that I've completed in several years.


OpenBSD does (2 releases every year, approximately 6 months apart).


This is sort of what scrum does isn't it? Release every cycle no matter what was completed. Whatever was not completed goes back into the backlog and potentially into the next cycle.


Ideally, yes.

In practice there are often complications, however.

Usually, it's just office politics: Those cases can be quite difficult -- you really need buy-in and TRUST from management (and have to do the right political plays, etc.) to be able to cut through the bullshit and "allow" feature slips. Books have been written about this scenario.

Rarely, deadlines are imposed by Real Politics, aka: law... which can make for Interesting Times. This can range from "quite difficult" to "too easy!", so I have no advice here.


> Release every cycle no matter what was completed.

What happens when your software is in a broken state that's worse than what you had before?


I guess the keyword is, "completed". If something is broken, I doubt it would be in master, or the team has done something terribly strange. Or they should be nicer to QA ;)


Rollback to last known good state. Hopefully that's not as far back as the previous release (been there, done that, all I got was this lousy egg on my face).


You don't have tests?


I actually hate it when software teams adopt the "we will release it when it is ready" stance, because it makes it practically impossible to plan and prepare for the new version.

I am also a strong believer that it is a sign of gross lack of internal discipline, or at least severe lack of confidence. It means you still suffer from "unknown unknowns" and therefore can't come up with and stick to a proper estimate (even if it is something broad, like Q2 2019). This doesn't say good things about a software team, in my opinion.

I know very well that estimates are difficult. But, again in my opinion, going with "we will release the next version on February 20th, now let's discuss what features we can implement during that timeframe" works out far better than "here is a list of features we want in the next version, now let's discuss when we can finish them by... actually never mind, we will just tell people 'when it is ready'".


Chrome and Firefox teams have, for years now.


Java moved to this approach a few years ago. They release every six months, which IMHO is too frequently.


This is scrum.


Considering that C++ has evolved a lot over the years (and grown quite large), what are good resources for a programmer to get started with the language in 2019?

I've heard Accelerated C++ is a good introduction, but it's quite old at this point. Would Accelerated C++ followed by Effective Modern C++ bring someone up to speed with modern C++? Is there a single book or online resource that would service this purpose better?


I'll second Scott Meyers and add that Essential C++ covers the most fundamental parts of the language. Books beyond that tend to cover much more specific and optional tools.

Reading Essential C++ cover-to-cover was very worthwhile in my job as a programmer in AAA games. Books beyond that have mostly involved thumbing around different items to see which I might find useful at some point. Games in particular tend to be very performance sensitive, but also compile-time and debug-performance-sensitive. A lot of the STL is not as respectful of those latter two, meaning a lot of game code uses little or even none of the STL. (Working with UE4 will involve using none, for example.) I'd definitely focus more attention on understanding core language features than the huge amount of content that exists in the STL.

By far the best element of C++ that a lot of other languages lack is const and const-correctness. The second best would be the robust features templates have in comparison to generics of other languages (though the applications allowed by that robustness can be mildly horrifying at times).


Const correctness in C++ is a nice feature, but to say it is missing in many other languages is a bit of an exaggeration. Many other languages offer a much superior tool: immutable data types. Immutability is stronger than C++ const correctness and easier to use at the same time.

Templates are nowhere near capabilities and ease of use of languages with proper (read: not accidental) macro/metaprogramming systems (e.g. Lisps) or languages with modern generic type systems designed from the ground up (Haskell, Scala/Dotty, Idris, etc). Templates are IMHO a powerful hack, but hack is still a hack with all the consequences - terrible error messages, slow compile times, difficult debugging, late error detection, a lot of accidental complexity caused by templates not being first-class citizens etc.


> Immutability is stronger than C++ const correctness and easier to use at the same time.

C++-style const appears to be stronger than just immutability, since you can have immutable objects in C++, but you can also pass const references to mutable objects.


A const reference to a mutable object doesn't guarantee that the object won't change, contrary to an immutable object. Hence const is weaker than immutable.

You can have immutable objects in C++, but C++ offers almost nothing to make dealing with such objects fast and easy. Also the lack of GC makes designing persistent data structures an order of magnitude harder task than in most other languages.
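
A minimal sketch of why const on a reference is weaker than immutability: it restricts that particular view, not the object itself.

  #include <cassert>

  void observe(const int& view, int& writer)
  {
      int before = view;
      writer = before + 1;         // mutate via another alias
      assert(view == before + 1);  // the "const" view changed underneath us
  }

  int main()
  {
      int x = 0;
      observe(x, x);  // both parameters alias the same object
  }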


That's only if you do not use a library and/or have no idea how shared_ptr is implemented.


The standard library doesn't come with persistent collections included. Just the fact that they are not standard like in some other languages, causes fragmentation.

As for shared_ptr, they are a good idea when you don't care about performance. And they don't solve cycles, which may appear in some structures (e.g. graphs).


That's because it is undefined behavior to cast away const and then modify an object that was defined const. Instead, you must tell the compiler via the mutable keyword, or else it will make optimizations based on the assumption that the object can't change.


Precisely why it's harder to use. C++ is more expressive, and thus creating self-consistent designs is harder.


C++ is more expressive than what? Than C probably. Than Java/C# - arguable. Than Scala/Haskell/Rust/Python/Ruby/R - no way.


More expressive than Java, sure. Java doesn't have overloaded operators, or cv-qualifiers, or templates, or many many many other things that make the type design space more expressive.

Ruby is hardly expressive at all in this respect - you can express to the interpreter very little about types, and the interpreter won't help you much at all.


C++ doesn't have GC, reflection, annotations, code generation utilities (annotation processors, cglib, etc), first-class generic types, existential types, rich standard library etc. That's why it is "arguable".

> you can express to the interpreter very little about types

Dynamic types are still types. Only the error detection moment is different, but lack of static types doesn't mean low expressivity.


in which of these languages can you have types depending on values? :=)


In Scala and Idris. Haskell has no direct support, but I believe you can get quite close with rank-2 types.

Also, typing is not the end of all the things. Most languages I listed have much stronger metaprogramming capabilities than C++. Scala, Rust, Template Haskell macro systems are superior to C++ templates.


You'll have to give me an example for that. Having static types is more expressive than not having static types, but I think types make designs easier to make and understand. const is just another layer to the type system.


It's another aspect of type design you need to make decisions for. If you can't see that, I can't help you.


> Const correctness in C++ is a nice feature, but to say it is missing in many other languages is a bit exaggeration. Many other languages offer a much superior tool - immutable data types. Immutability is stronger than C++ const correctness and easier to use at the same time.

Technically C++ const can be used to implement immutable types just as they exist in other languages (and can be hidden behind a library entry point) but I agree that conceptually it's easier to think of an immutable string or vector as an inherent property of the object rather than one applied. And in C++ I don't think you can prevent casting away constness.

> Templates are nowhere near capabilities and ease of use of languages with proper (read: not accidental) macro/metaprogramming systems (e.g. Lisps) or languages with modern generic type systems designed from the ground up (Haskell, Scala/Dotty, Idris, etc). Templates are IMHO a powerful hack, but hack is still a hack with all the consequences - terrible error messages, slow compile times, difficult debugging, late error detection, a lot of accidental complexity caused by templates not being first-class citizens etc.

Templates were not an accidental hack for macros; as Stroustrup once said to me, "the ecological niche of 'macro' had already been polluted so templates were my only way to put macros into the language." I agree it sucks next to lisp macrology (but as a lisp developer since the 1970s I would say that wouldn't I?) but hell, the language makes a distinction between expressions and statements so there's only so much you can do.


> And in C++ I don't think you can prevent casting away constness.

Well, UB prevents you from doing it if the object is originally const.

My bigger gripe with the C++ (and C) const system is the lack of transitivity. A function taking a const X& may still modify e.g. the contents of an exposed pointer member of X.
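
i.e. (a minimal sketch):

  struct X { int* data; };

  void f(const X& x)
  {
      // x.data = nullptr;  // rejected: the pointer member is const here
      *x.data = 42;         // accepted: what it points to is not
  }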


Is there a language with this behavior? I would find that very confusing.


> By far the best element of C++ that a lot of other languages lack is const and const-correctness.

Add to that volatile-correctness... which most people aren't even aware of. http://www.drdobbs.com/cpp/volatile-the-multithreaded-progra...


This article is nearly 2 decades old and gives very bad advice for multithreaded c++ programming. To a first approximation, volatile should never be used for multithreading safety.


That 2-decade-old article took some 5+ pages to thoroughly explain its ideas. Turns out it was (and still is) pretty compelling, and the changes in these past 2 decades don't really affect what it's saying.

You, who've surely carefully read the article, understood it in its entirety, and played around with the notion to get a feel for its upsides and downsides, very insightfully reduced it all down to "very bad advice" with zero elaboration. You find that compelling?

I would be careful with those "first-order approximations".


Volatile is not a memory barrier. Different threads can observe reordered accesses regardless of volatile.

There's a reason that it's been proposed for removal (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p115...)


Where did you see the article claim it was a memory barrier?


It doesn't say it's a memory barrier, but it absolutely has to be for the code to work.

I agree I should be careful with "first-order approximations", but honestly I was being gentle because I do love drdobbs. But all of the things it talked about have been replaced with things that aren't broken in subtle and hard-to-debug ways.

Volatile simply cannot be used as a general-purpose, portable synchronization primitive.


> It doesn't say it's a memory barrier, but it absolutely has to be for the code to work.

No, it doesn't say it because it's not trying to make the point that you assume it's trying to make. Honest question: did you fully read and digest the article before commenting? If so, tell me on precisely which line you saw a lack of a memory barrier causing a problem (describe the race condition & bug you found) and explain how exactly you found that to undermine the point of the article.


You don't even need to read the whole article to see the GP's point: the very first example with flag_ is concurrent, unsynchronized access to a shared volatile, and the article promotes it as "the right way".

Yes, it goes on to elaborate on a basically unrelated use of volatile to control access to member functions on classes, which deserves a separate discussion - but you don't even need to get past the first few paragraphs to see that it promulgates the idea that volatile makes concurrent access safe. It doesn't.


The code in the article requires memory barriers. It doesn't have them. It's broken code.

The author is using volatile as though it implied a barrier.


> The code in the article requires memory barriers. It doesn't have them. It's broken code.

> The author is using volatile as though it implied a barrier.

No, you're just not reading the article. Please go read the article. And I don't mean skim. I mean actually read it with the assumption that you have zero clue what it's going to say, because that's more accurate than your current assumption. Then if you still think you're correct, please explain to exactly which line(s) in the code are broken and how precisely that actually undermines the points the article has been making. You will struggle to do this.

In case it helps, for your reference: the author isn't, and never was, a random C++ dummy.


Your comments in this thread have broken the HN guidelines. Would you mind reviewing them and sticking to the rules when posting here? We'd appreciate it.

https://news.ycombinator.com/newsguidelines.html

Getting personal, bringing up whether someone read an article properly, making uncharitable interpretations of other comments, snarking, and posting in the flamewar style are all things they ask you not to do and which we're trying to avoid here. Not that your comments were anything like as bad as some that we see, but even in seed form these things have ill effects.


Hi dang, thanks for the heads up (and yes, I'll review them). Trouble I've been having is I feel I already tried to follow the tips in the guideline in my initial comments (see [1]), but it didn't work -- people still commented claiming the article is recommending the opposite of what it's actually claiming, meaning they clearly did not read all of it. I'm stuck on what to do. If I can't explicitly tell them to go read the article, then what is the proper response?

Edit: Someone deleted one of their replies here. Just wanted to say thanks, I read it and I think it'll be helpful moving forward.

[1] https://news.ycombinator.com/item?id=20430310


> If I can't explicitly tell them to go read the article, then what is the proper response?

"This isn't what the article says. For instance, in paragraph n, the author states 'x, y and z.'"


This is basically what I tried in the comment of mine that I linked to, but it only works if they make a specific claim that the article refutes. It doesn't really work in response to "this article is very bad"...


> You, who've surely carefully read the article, understood it in its entirety, and played around with the notion to get a feel for its upsides and downsides, very insightfully reduced it all down to "very bad advice" with zero elaboration. You find that compelling?

I’m pretty sure you did not try to follow the guideline in your initial comments. (see [0])

[0] https://news.ycombinator.com/item?id=20430270


Yes, not in that comment. I was at a loss on how to reply to a comment that just trashed the article as "very bad" and left it at that; I'm thinking maybe I shouldn't have replied at all. But I tried to do things a little better in the next one. I failed regardless, though, so that's why I'm hoping someone can offer a new approach.


> Just like its better-known counterpart const, volatile is a type modifier. It's intended to be used in conjunction with variables that are accessed and modified in different threads.

Simply not true. It has limited use with memory mapped I/O (although even there it misses necessary guarantees), but is not intended to work with threads.

> So all you have to do to make Gadget's Wait/Wakeup combo work is to qualify flag_ appropriately:

    class Gadget
    {
    public:
        ... as above ...
    private:
        volatile bool flag_;
    };
Is not correct, and will not work reliably.

I spent some time working with Andrei at Facebook, and he's a smart guy, but this article is wrong.

Don't do what he says here.

Volatile needs to go away.


He's describing the current state of affairs in that statement and providing background context: volatile was seen at the time as a solution to the memory barrier issue, which we now know to be incorrect, but which was the closest approximation the C++ standard then had to a half-solution. That's NOT the point of the article; it's just background context for 2001 readers. There's a whole article after that which does not tell you to use volatile that way, and the entire reason I posted it was that part. Did you read past that paragraph at all, till the end? Did you understand what the article was actually trying to tell you, or did you just try to find a code snippet that didn't look right without bothering to read the full article? Did you see he literally advises you in the end: "DON'T use volatile directly with primitive types"? The entire point of the article is to tell you about a use case that is explicitly not the one you're imagining.


The paragraph where he claims that volatile works?


The paragraph where he says "DON'T use volatile directly with primitive types".


Despite that it was written by Alexandrescu, I can say that without a doubt this 2001 article doesn't represent the current state of thought around MT programming and volatile. I'd think of it more as a historical artifact than anything.


The fact that it doesn't doesn't mean it shouldn't be. It's a damn useful method, it just didn't become popular.


It’s completely broken. One of the modern Meyers books even has a chapter on not using volatile in the Dr. Dobbs article manner.

When the article was written, there was no real alternative, and volatile accidentally worked nicely on certain architectures. It failed on others. It absolutely was never designed to do what you're trying to defend. How it handled memory read/write barriers has always been non-portable, implementation- and architecture-specific behavior. Now that there are proper ways to do barriers portably, the volatile approach is terrible advice.

C++ 11 addressed this all in a proper manner, after much research and many papers on the matter. Since then, for major compilers on major architectures, the new C++11 features have been implemented correctly. Volatile has zero use for correct multi threading code. It only has use for memory mapped hardware from a single properly synchronized thread.

Your article, as people keep telling you but you seem unable to accept it, is wrong. It’s now absolutely not portable, it’s inherently broken, and leads to undefined, hard to debug, terrible behavior for threading issues.

Go dig up the backstory on how C++11 got its threading model and dig up the Effective Modern C++ chapter on it to learn why your article is bad.


It sounds like you don't get what the article's point is. The article is NOT using volatile as a barrier mechanism. It's using it as a compiler-enforced type annotation, which you strip away before accessing the variable of interest. It sounds like absolutely nobody here is willing to read the article because they think they already know everything the article could possibly be saying. Fine, I give up, you win. I've summarized it here for you.

The idea is this: you can use volatile like below. It's pretty self-explanatory. Now can you look through this code and tell me where you see such a horrifying lack of memory barriers and undefined behavior? (And don't point out something irrelevant like how I didn't delete the copy constructor.)

  #include <future>
  #include <mutex>

  template<class T>
  class lock_ptr
  {
      T *p;
  public:
      ~lock_ptr() { this->p->m.unlock(); }
      lock_ptr(volatile T *p) : p(const_cast<T *>(p)) { this->p->m.lock(); }
      T *operator->() const { return p; }
  };

  class MyClass
  {
  public:
      int x;  // public so main() can read the result below
      MyClass() : x() { }
      mutable std::mutex m;
      void bar() { ++x; }
      void foo() volatile { return lock_ptr<MyClass>(this)->bar(); }
  };

  void worker(volatile MyClass *p)  // called in multiple threads
  {
      p->foo();     // thread-safe, and compiles fine
      // p->bar();  // thread-unsafe, and a compile-time error if uncommented
  }

  int main()
  {
      MyClass c;
      auto a = std::async(worker, &c);
      auto b = std::async(worker, &c);
      a.wait();
      b.wait();
      return c.x;
  }


> It sounds like you don’t get what the article's point is.

Yes I do. It’s simply wrong. What it says about type annotation is correct, but has zero to do with threading because volatile has zero meaning for accesses from different threads. It then uses volatile to (incorrectly) build threading code. You seem to think volatile has some usefulness for threaded code; it does not. You think volatile adds benefit to your code above; it does not. The type annotation does not give you the ability to have compilers check race conditions for you - it works on some and will fail on others.

Add volatile to your bar function. Oops, got race conditions. Volatile is not protecting your code; properly using mutexes is. Requiring programmers to intersperse volatile as some type annotation makes code more error prone, not less. One still has to correctly do the hard parts, but now with added confusion, verbosity, and treading on undefined behavior.

I think you believe his claim "We can make the compiler check race conditions for us" because you're relying on the same assumption: that compilers will check volatile in the manner your code above does. That's undefined behavior, open to compiler whims. Good luck with that. There's a reason C++ added the more nuanced ordering specifications - to handle the myriad ways some architectures worked (and to mirror discoveries made in academic literature on the topic after this article was written).

This article is even mentioned in the proposal to remove volatile from C++ altogether http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p115.... I’ve known about this for some time, and hacking in type annotations like this adds no value; it simply makes a mess.

More errors from the article, which is why people should stop citing it:

First sentence:

“The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.”

This is simply wrong. The article goes on to try to make multithreaded code correct using volatile.

More quotes from the article that are simply wrong:

“Although both C and C++ Standards are conspicuously silent when it comes to threads, they do make a little concession to multithreading, in the form of the volatile keyword.” Wrong; see the Sutter quote below.

“Just like its better-known counterpart const, volatile is a type modifier. It's intended to be used in conjunction with variables that are accessed and modified in different threads.” Wrong. See the Sutter quote, and the ISO standards. Volatile was never intended for this, so it was never safe for doing this.

“In spite of its simplicity, LockingPtr is a very useful aid in writing correct multithreaded code. You should define objects that are shared between threads as volatile” Wrong on so many levels. The referenced code will break on many, many architectures. There is simply no defense of this.

The article has dozens more incorrect statements and code samples trying to make threadsafe code via volatile.

I’ve written articles on this. I’ve taught professional programmers this. I’ve designed high performance C++ multithreaded code for quite a while. It’s simply wrong, full stop.

Here’s a correct destruction of the Dobbs article by someone who gets it [1]. They, like you, were once misled by this article.

The money quote, from Herb Sutter “Please remember this: Standard ISO C/C++ volatile is useless for multithreaded programming. No argument otherwise holds water; at best the code may appear to work on some compilers/platforms”

I suspect you’ll still stick to the claim this article has value, given your insistence so far against so many people giving you correct advice. Good luck.

[1] https://sites.google.com/site/kjellhedstrom2/stay-away-from-...


> “The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.”

> This is simply wrong.

Hardware interrupts and UNIX signals are the asynchronous events in question, and C's volatile is still useful in those contexts, where there is only a single thread of execution.


Volatile still doesn’t protect you there, whereas C++11 atomics do. If the item you mark volatile is not changed at the CPU and cache level atomically, you’re going to access torn variables. I’ve been there and am certain about it. And pre-C++11, there was no way to portably find out which operations are architecturally atomic, so it was impossible to write such code portably. C++11 fixed all that, and there’s no reason to use volatile for any of this any more: use atomics, possibly with fine-grained barriers if needed and understood.

Here’s a compiler showing that your use fails on some systems:

http://www.keil.com/support/docs/2801.htm


You're right that just "volatile" isn't enough; typically you'd declare the variable sig_atomic_t to be portable, which has made the necessary guarantees since C89, so it predates C++11. (It does not guarantee anything regarding access from multiple threads, of course.)

The problem with std::atomic<T> is that it may be implemented with a mutex, in which case it can deadlock in a signal handler. But as you say, you can check for that with is_lock_free.
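
For reference, the classic single-threaded signal-flag pattern being described (a sketch):

  #include <csignal>

  volatile std::sig_atomic_t stop = 0;  // the one portable volatile idiom

  extern "C" void on_sigint(int) { stop = 1; }

  int main()
  {
      std::signal(SIGINT, on_sigint);
      while (!stop) { /* work */ }  // the handler may set stop asynchronously
  }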


Yep. And this thread illustrates why threading is hard, especially in C++ :)

Oh, and sig_atomic_t is not guaranteed thread-safe, only signal-safe. The difference is that when you move your code from a single-CPU to a dual-CPU system, it breaks. I ran across this some time ago moving stuff to an ESP32.

Atomic so far works best across the chips I’m poking at.


It shouldn't be.

The stuff the article recommends is straight up UB in modern C++. Volatile has never been specified to work properly with threads, but before C++11 when there was no alternative, some limited use in that context, preferably hidden away from the casual user, may have been acceptable. Recommending these techniques today, however, makes no sense.


It should be.

The stuff you're taking about is not the same stuff I'm talking about. There's nothing UB about the locking pointer pattern and how it uses volatile. Read the article in full. It has a specific thesis that is just as valid today as it was 20 years ago, and that thesis is NOT the 2001 malpractice you're talking about.


Yes, the locking pointer pattern shown there is also UB, because it is UB to define something as volatile and then cast away the volatile and use it, which is the core of that technique.

Yes, it's not UB in the race sense, because he is using mutexes everywhere there and just sort of overloading the volatile qualifier to catch member function calls outside the lock. In addition to being UB, it's weird - why not just encapsulate the object itself inside a class that only hands out access under control of a lock? That is, why have the volatile object passed in from the outside if you will never legally access it?

The very premise of this article, that volatile is for concurrently modified objects across threads is false in modern C++ - and the very first example is a faulty use of volatile under the assumption that unguarded concurrent volatile access is safe.


> it is UB to define something as volatile

Can you point me to which part of the standard says that it's UB to cast away a volatile reference to a non-volatile object? See my example in [1] if you don't see why the object itself doesn't need to be volatile.

> it's weird

No, you're just not used to it. It's perfectly fine once you use it a bit. And regardless, there's quite a huge chasm between "it's completely wrong and undefined behavior" and "I don't like it, it's weird".

> why not just encapsulate the object itself inside a class that only hands out access under control of a lock?

That's a separate discussion. Right now we need to get the UB-ness claims out of the way. Once we agree it's correct in the first place then we can discuss whether it looks "weird" or what its use cases might be.

[1] https://news.ycombinator.com/item?id=20430882


> Can you point me to which part of the standard says that it's UB to cast away a volatile reference to a non-volatile object?

That is not UB, it's only UB if the object was defined volatile, which is what the article does, explicitly:

> You should define objects that are shared between threads as volatile and never use const_cast with them — always use LockingPtr automatic objects. Let's illustrate this with an example [Example goes on to define the object volatile]

> No, you're just not used to it. It's perfectly fine once you use it a bit. And regardless, there's quite a huge chasm between "it's completely wrong and undefined behavior" and "I don't like it, it's weird".

There might be a glimmer of something interesting in overloading the use of volatile on user-defined types as a second type of access control analogous to "const", but one that you use for some other purpose, e.g., gating access to functions based on their thread-safety, or anything else really.

This article doesn't make a convincing case for it because the first example is UB, the second example is UB, it propagates the broken notion that volatile is useful for concurrent access to primitive types, it doesn't include any discussion of modern techniques like std::atomic<>, etc. Of course, that's no fault of the author, who wrote it in 2001, when the well-defined way of doing things was 10 years away.

It's mostly a problem when people try to promote this, today, as an insightful view on volatile and multithreaded code. As a whole, it isn't and propagates various falsehoods that people have been trying to get rid of forever. What glimmer of an interesting point is in there regarding using volatile-qualified objects as a second level of access control orthogonal to const is washed out by the other problems.

> That's a separate discussion. Right now we need to get the UB-ness claims out of the way.

It's UB. Just admit that it's UB because the flag_ example does concurrent access to an object from different threads, at least one of which is a write, and the LockingPtr and follow-on examples are UB because they involve casting away volatile from a volatile-defined object.

If you can agree with that, then maybe you can present a related technique, different to the one in the article, which uses volatile in a useful way.


"Just admit" what? That applying volatile to an object and casting that away like with the flag_ example is UB? Yeah, I that's UB. It also wasn't the point of the article, and the use of volatile required for the technique the article is what actually matters, which isn't UB.

Can we step back for a second?

Go back to my top comment. Why did I even post this article in the first place? The point was that "volatile-correctness" is (basically) awesome, and it's hard to get something like it in other languages. This article is where the idea originated from, so I linked to it. i.e.: "There's something called volatile-correctness, which you can learn about by reading this article." The point was not "read this article and blindly sprinkle volatile across your codebase in exactly the same manner and you'll magically get thread safety".

What were you supposed to take away from the article? The idea of volatile-correctness, the idea that you can use a locking pointer to regulate multithreaded access to a class's methods. The idea that volatile acts as a helpful type annotation in this regard, independently of its well-known effects on primitive objects. You can apply it easily without ever marking objects as volatile, like I just showed you in that example. Yet somehow instead of actually extracting the fundamental concepts and ideas from the article, you and everyone else here are trashing it by insisting that the only possible way anyone can read that article is a naive verbatim copy-paste of its text from 2001 to 2019...? Why?

> If you can agree with that, then maybe you can present a related technique, different to the one in the article, which uses volatile in a useful way.

But omitting a couple of volatiles doesn't make it a different technique! You just skip the incorrect uses of volatile. The technique is the same.
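
To make this concrete, here's a minimal sketch of what I mean, with the object never defined volatile (so no UB from casting the qualifier away); the names are mine, not the article's:

    #include <iostream>
    #include <mutex>
    #include <thread>

    // volatile is used purely as a type-level "not locked yet" tag.
    template <typename T>
    class LockingPtr {
    public:
        LockingPtr(volatile T& obj, std::mutex& mtx)
            : obj_(const_cast<T&>(obj)), lock_(mtx) {}
        T* operator->() { return &obj_; }
        T& operator*() { return obj_; }
    private:
        T& obj_;
        std::lock_guard<std::mutex> lock_;
    };

    struct Counter {
        void increment() { ++value; }  // non-volatile: unreachable through the volatile view
        int value = 0;
    };

    Counter counter;                     // NOT defined volatile
    std::mutex counter_mutex;
    volatile Counter& shared = counter;  // other threads only ever see this view

    void worker() {
        // shared.increment();  // compile error: no volatile-qualified overload
        for (int i = 0; i < 100000; ++i)
            LockingPtr<Counter>(shared, counter_mutex)->increment();
    }

    int main() {
        std::thread a(worker), b(worker);
        a.join();
        b.join();
        std::cout << counter.value << '\n';  // always 200000
    }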


"Just admit" that the stuff in the article is UB, because you were going around badgering people to point out the UB, and because your last post demands: "Right now we need to get the UB-ness claims out of the way. Once we agree it's correct in the first place..."

So yes, let's get the UB claims out of the way - by agreeing that it's UB. Not just the flag_ example, but also the LockingPtr example that is the "point" of the article.

> you and everyone else here are trashing it

To be clear, I'm not really "trashing" the article. It's a relic of its time. I am trashing the idea that it's somehow a good introduction to any clever MT technique today.

> by insisting that the only possible way anyone can read that article is a naive verbatim copy-paste of its text from 2001 to 2019...? Why?

I explained it earlier: because the article has too many flaws to be a clean illustration of the technique. It starts with UB, ends with UB, makes wrong assertions about the purpose of volatile, etc.

Again, I agree there might be a glimmer of something here - but this article isn't the way to show it. The reaction you got was expected and fine. I can imagine a different article, written today, without the claims about the purpose of volatile, without the flag example, without the UB of casting away volatile from volatile objects, acknowledging the existence of std::atomic and how this technique complements or replaces it. That could be useful.

I looked at your example, and yes, I see the potential if you want to have an object with a thread-safe and non-thread-safe interface split like that (or really any split: you can overload volatile for any type of access control where the functions divide cleanly). It has the unfortunate side effect that volatile was not designed for that, and it implicitly makes all your members volatile and hence may pessimize code generation. I guess it doesn't matter that much if all the volatile functions follow the pattern of immediately shelling out to a non-volatile function, though.


Maybe someone should write a more modern version of the article, I don't know.

I would also not expect it to pessimize code generation, since the final dereference should always be of a non-volatile pointer, though I suppose an optimizer bug might make it behave otherwise.

You can combine it with atomic, they're not substitutes. It could let you implement two versions of an algorithm: a lock-free multithreaded one, along with a single-threaded one that uses relaxed accesses (or even fully non-atomic accesses, had C++ allowed that). And then you'd auto-dispatch on the volatile modifier. The possibilities are really endless; I'm sure the limiting factor here is our imagination.

I've thought about the other types of split for a long time too, and I haven't managed to come up with other compelling use cases, even though I also feel they should exist. It would be interesting if someone could come up with one, because the ability to have commutative tags on a type seems really powerful.


“A Tour of C++” (second edition) by Stroustrup is a great starting point.


C++ is deep and nuanced, so reading books will help structure your learning. I've found Scott Meyers's books to be great for starting out. Those will give you a fantastic foundation, from which you can dive deeper. Those and others have added significantly to my ability to write clean and maintainable software.

This SO post is a great guide for where to look: https://stackoverflow.com/questions/388242/the-definitive-c-...


I've been learning C++ for the first time starting in 2019.

I used Marc Gregoire's Professional C++, which has a version that was published last year and includes C++17, alongside Scott Meyers's line of books, and watched a variety of YouTube videos (e.g. Jason Turner, CPP talks...)


I second the recommendation of Professional C++. I am just a self-taught programmer, and to be quite honest, felt I was getting in a bit over my head by buying a book aimed at professionals. But I have found the material to be perfectly accessible even for someone without a CS degree, and I am now using C++ for my personal projects. I cannot recommend that book highly enough. Just my $.02


It's hard to name just a single resource, and some HN mates have already listed excellent resources. I'd like to add a little bit that perhaps someone will find useful: books by B. Stroustrup are excellent if you already know programming. I wish for a new edition of "The C++ Programming Language", to be honest. I learnt a lot of useful "tips" from many books, but only by reading his books was I able to see why C++ made certain choices. That really helped me "level up".


I recommend going through some recent CppCon presentations, especially these by committee members and people who implemented features/libraries -- Bjarne Stroustrup, Herb Sutter, Howard Hinnant, Chandler Carruth, etc.

Also have a look at Mike Acton's DOD videos -- he'll tell you that modern c++ (and even oop) is garbage, and he'll be right for his own particular case :)


Stroustrup -> Meyers -> Alexandrescu (optional but lots of very clever ideas)


Tour of C++, 2nd edition, from Bjarne Stroustrup is quite good for getting to know all the major updates and how to write good C++20 code (the book goes through all relevant updates since C++11).


I’m sure the committee has thought about this 1000x as much as me. But, I wish they would go further and release every year. IMHO, that would relieve most of the pressure to push out features that aren’t fully baked. 3 more years is a long time to wait if you’ve already been working on something for several years. 1 more year, when you know you could use “maybe a couple more months” (likely 8 in practice), not so bad.


The final review process (for ISO standards) is most probably the bottleneck there.


ISO generally allows only a 5-year cycle; 3 years for C++ is a special case. Furthermore, the bureaucratic overhead would then dominate the proceedings (not unlike thread thrashing).


I’m a huge fan of this strategy; we use it with Rust, for virtually identical reasons. Ruby also ships each Christmas.


Can't speak for Rust, and I imagine it makes more sense for them since it's a young language, but I wish the C++ folks would chill a bit and take their time a little more; it's getting kind of ridiculous. They don't need to move fast and break things (which they literally have; see e.g. result_of and invoke_result). By the time the tooling across various platforms has finally had a chance to take a breath, they've already got the next version of the standard out, and it's getting hard to keep up with all their changes. I feel there's a better balance between the 13 years ('98->'11; '03 wasn't really a new standard) and the 3 years we have now. Maybe every 5-6 years?


I don’t think that making the cycle time longer helps; I think making it shorter helps. There’s less pressure to put something in a specific release when they happen less frequently. You can, counter-intuitively, take your time.

I know less about the exact details, but I also think that having something to play with helps; I believe that this is what’s going on with the move towards more TSes before shipping spec text, right? Being able to try things out in nightly helps us a lot.


If your goal is to help improve C++, sure, a shorter cycle time is better. If your goal is to use the language as a communication mechanism... having it change constantly underneath you isn't helpful in my experience. And it's not necessary to release a new standard to let people try things out; you can let people play around with new features without iterating on them as formal standards.


“Change constantly” is too broad; there are different kinds of changes. Stability is paramount.


If you find it too broad then just narrow it down in your mind. You won't be left with the null set.


> There’s less pressure to put something in a specific release when they happen less frequently.

I think this phrasing contradicts your point. Do you mean "more frequently", or am I missing something?


I did, my mistake!


You’re missing one of the major points in the post: making release cycles take a long time means that small features aren’t finalized for that long period either.

You seem to be thinking that the compiler devs wait until a release is done to start implementing it, which is not what happens. Part of the big benefit of releasing every three years is that it makes continuous development actually possible: compiler devs have a better idea of how stable things are, and so which things aren’t going to need to be completely reimplemented in another 3 years.

I do not understand where you get a “catch their breath” mentality - none of the modern c++ compilers have a three year cadence, most run with approximately annual major releases.


No... there's more to programming than just having a shiny new compiler. VS2017 for example had problems that held me back at VS2015, so I couldn't upgrade until I got to VS2019, but meanwhile projects moved on. The 1-year period you're imagining was really a ~4-year period where the language syntax had changed, the stdlib had changed, and projects had moved on, so I couldn't even compile things. Heck, I couldn't even read the language anymore, let alone worry about all the new stuff in stdlib.

And it's not like they've been only adding small features every 3 years. If they were then my position would be different too. But small features being small, people can live without them. Not having them is a lot better than not being able to use the language at all.


Please give some specific examples and not just vague "problems".


Huh? What would that accomplish for you?


You just said, 'I had a whole bunch of problems, and that's why I believe the standard should change more slowly.'

You didn't actually say what those problems were, you didn't provide any evidence that any bugs you hit were as a result of an increased standardization rate.

You're also assuming that somehow spending more time with unstable language features will result in fewer problems, which is something where we know you're wrong. Because we've seen what happens when a longer cycle exists, we know your claim that the compilers will be less buggy is exactly wrong.


> you didn't provide any evidence that any bugs you hit were as a result of an increased standardization rate.

You misunderstood the argument. The claim was not that the VS problems were due to the increased standardization rate. They weren't C++-related at all. Rather, the problem was that I couldn't move onto 2017 due to unrelated VS problems, even though I needed to move onto it in order to be able to work on C++ projects that had already started using C++17.

> You're also assuming that somehow spending more time with unstable language features will result in fewer problems, which is something where we know you're wrong. Because we've seen what happens when a longer cycle exists, we know your claim that the compilers will be less buggy is exactly wrong.

Again, I was not saying compiler bugs increase when you rush the standard. See above.


So maybe choose not to use the very newest features? That's what I've always done and it's worked quite well.


That's exactly what I do on my end, but it's more than annoying when I have to deal with other projects and I suddenly can't even compile them anymore until I upgrade my toolchain and learn what's basically a new language.


This is similar to Java's transition from a feature-based release model to a time-based release model. The first rapid release was Java 10 in March 2018.


So, in 2019, if you want the execution speed and efficiency of C++ and high-level features, without the complexity of C++ or Rust, how about Nim?


Isn't Nim garbage-collected? The better alternative is probably Zig.


C++ needs to merge with Python to create the uber-language: Python for rapid development, and drop to C++ for performance. Like Cython, but with native integration.


I'm giving a talk [1] at PyCon AU in a few weeks on a similar topic. Though you're suggesting Python/C++ and I'm covering MicroPython/C.

The concept is to use MicroPython on embedded devices but, if performance is lacking, drop into C to create a module that can be easily accessed from MicroPython.

I've found this to be an exceptionally productive embedded development environment!

[1] https://2019.pycon-au.org/talks/extending-micropython-using-...


Boost.Python[0] is not an "official merging" of C++ and Python, but might be something of interest to you. On that page is the link "Building Hybrid Systems with Boost.Python", which is an intro article on it.

0 - https://www.boost.org/doc/libs/1_70_0/libs/python/doc/html/i...


Please don't, because you know we'll end up with a Frankenstein language with the cons of both rather than the pros of both.


What stops you from doing this now? This is pretty much how Python works currently when you need performance.


For the most part, today's solutions let you write python modules in C/C++. So your core app is in python. I am thinking of the scenario where your core is in C++ and extendable in python for the business logic -- where the core can be a trading system engine, or a webserver for example. The rationale is python serves the algos/data scientists/quants use case really well, and C++ does the "engine" part of things really well.


No. In my opinion, one of the biggest benefits of Python is that the code looks consistent. There is a single "correct" way to do things. Merging it with C++ would remove this. I use both C++ and Python on a regular basis.


as long as basic things like __alignment__ are still fundamentally broken, I simply do not care about the syntax. (I mean, it doesn't get any more basic than this, no?)


What's wrong with alignment?


well there is this:

    #ifdef _MSC_VER
    __declspec(align(16)) struct float4 { float v[4]; };
    #else
    struct float4 { float v[4]; } __attribute__((aligned(16)));
    #endif

but the main problem is that alignment is _not_ part of the type system (alignas vs alignof).

But, to put it more generally: 90% of your performance is in memory access, and compilers are rubbish at optimizing that; they are, however, getting increasingly good at the other 10%. See also https://www.youtube.com/watch?v=rX0ItVEVjHc for real-world examples/exploration.


> well there is this:

    #ifdef _MSC_VER
    __declspec(align(16)) struct float4 { float v[4]; };
    #else
    struct float4 { float v[4]; } __attribute__((aligned(16)));
    #endif

I don't understand what you are trying to say here? Why not just use

    struct alignas(16) float4 { float v[4]; };
You can even put both __declspec( align(16) ) and __attribute__ ((aligned(16))) in the same place if you want to have a fallback for older compilers.
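
For instance, a sketch of such a fallback (the ALIGNED16 macro name is just illustrative):

    // Fallback for pre-C++11 compilers; modern code can simply use alignas.
    #if defined(_MSC_VER)
    #  define ALIGNED16 __declspec(align(16))
    #else
    #  define ALIGNED16 __attribute__((aligned(16)))
    #endif

    struct ALIGNED16 float4 { float v[4]; };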


Glad to have left C++ behind. I thank it for making me a better software engineer but I don't think I want to work with core dumps for the rest of my life.


Are core dumps, per se, really bad? I think the feature is useful. What is bad is the frequency of it happening. For example: are Python's tracebacks better? Maybe I had bad luck, but in my experience they happen with similar frequency. Core dumps allow you to go to the debugger and see what has happened. You can easily send the file. If the crashed software ships without source you can't do a thing, but that was the intention. With obfuscated Python one would also be left with nothing.


I speculate the parent comment was referring to the frequency of core dumps: that he/she experiences more of them compared to the Python tracebacks you mentioned.

Disclaimer: I'm speculating that's what they meant, because I feel the same way. It's more common (for me) for Python to refuse to run than to run anyway and later crash, compared to C++. That said, Python isn't a good analogy there... Rust or Golang are more likely to refuse to compile bad code instead of running it anyway and crashing later.


Core dumps aren't bad themselves. But the ease and frequency by which other developers (and myself) can cause a core is just insane. It makes C++ more of a headache than anything.


I know; I prefer the ITS/Lispm environment in which you were always running in the debugger. But I think most users wouldn't like that. Bill Gates laughed at me when I once suggested it in passing.


> I thank it for making me a better software engineer

It's especially nice that Rust has kept this basic attitude from C/C++ and in fact strengthened it a lot and aligned it with modern trends, even as it got rid of the annoying "core dumped" part almost in its entirety.

(The latest tagline of Rust is "A language empowering everyone to build reliable and efficient software." Do notice the everyone part, and especially the empowering bit - as opposed to letting even novice developers hobble themselves with substandard, bootcamp-level software-dev practices!)


I wish Rust would have kept saner OOP style classes from C++ instead of this bizarre trait stuff. The whole language feels like everything is just different for the sake of being different. Why is it "fn blah (x -> int.. -> int" or whatever when the rest of the tokens seem designed to save keystrokes at the cost of readability? Everyone is used to "int x(int y..". I've learned it some and the concepts around memory ownership and everything are good but the syntax is needlessly weird and annoying.


Special keywords like `fn` make parsing simpler. Trailing return types are useful in languages that support generic programming, because they make it easier to write a function whose return type depends on the types of its generic inputs. Even C++ has this feature now, using decltype and arrow notation:

    template<typename T, typename U>
    auto add(T t, U u) -> decltype(t + u)
    {
        return t + u;
    }
C++ is notoriously hard to parse. Consider the "most vexing parse", or the fact that refactoring tools for C++ are always flakier than their Java / C# / Go equivalents, or the fact that https://cdecl.org/ exists. New languages in the "expressive, high performance" niche cannot continue to be held back by C++ syntax.
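
For the unfamiliar, a quick sketch of the most vexing parse (Widget and Timer are made-up types):

    struct Timer {};
    struct Widget { Widget(Timer) {} };

    // Declares a *function* named w that returns Widget and takes a pointer
    // to a function returning Timer -- not a Widget initialized with a Timer.
    Widget w(Timer());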


Tools can use LLVM infrastructure to get the AST. No need to parse it yourself.


The AST is still really complex though (in terms of corner cases to cover).


C# and Java as you mentioned use very C++-like syntax.


To a first approximation, yes. But consider that the C++ grammar has ~100 more rules than Java (~170 vs. ~280), and parts of the C++ grammar are strongly context-dependent, which isn't true in Java.


I actually greatly appreciate that function definitions begin with a dedicated keyword in Rust. I've always found it rather painful that definitions don't stand out from function calls in C and C++.


Agree. With a dedicated keyword, you can find a function definition using just grep.


Rust is not based on C++. People coming from Haskell/ML aren't used to C's backwards type declarations, and that's why they're not in Rust; the people who make Rust have experience with more than just C/C++. So it's not different for the sake of being different, it is actually trying to be similar ... just not similar to C.


Similar to languages much fewer people use, instead of languages more commonly used as systems languages which Rust is meant to be.


In general, it’s a thing more newer languages are moving to, because it’s regarded as superior for a few different reasons. Mostly, it keeps the syntax more regular when you have type inference. See Kotlin as another recent example.

In rust, we have additional reasons, and that’s because it’s not

  let name: type = expression;
It’s

  let pattern: type = expression;
Patterns offer more power than simple variable declarations. The names may not correspond 1-1 with the type, because you can create multiple names by destructuring more complex types


C++ classes vs. Rust traits isn't just a matter of syntax differences.

Btw the function syntax is:

    fn foo(x: u16, y: u16) -> u32
If it were more C++ like it'd be:

    u32 foo(u16 x, u16 y)
Which is fewer keystrokes, if anything.


Amusingly, C++ also supports Rust-style function headers, and has since C++11 (which is well before Rust was ever on the radar; really both C++ and Rust were inspired by ML here). The following two are identical in C++:

    u32 foo(u16 x, u16 y)
    auto foo(u16 x, u16 y) -> u32
IIRC there are contexts where the former does not work and the latter is required, which I believe means that ML-style function headers are strictly more powerful in C++ than the original C-style function headers.


Exactly, they do stuff like fn yet make it longer than necessary otherwise.


I don't think it should be designed to save keystrokes at the cost of readability.


It already does that elsewhere, yet strange backwards type definitions are much less readable by default to most programmers. It's like making us learn French when we could have learned British English, assuming American English as a starting point.


Plenty of other languages including a large percent of recent languages have the return type on the right.

Haskell, Visual Basic, Scala, F#, Go, (Rust), Kotlin, TypeScript, Swift.

Even C++ allows you to put return types on the right. Why? Because putting the types on the right allows return type deduction. I think it's better to have a single syntax (all types on the right) rather than the 2 syntaxes C++ has.

https://medium.com/@elizarov/types-are-moving-to-the-right-2...


I think you have to be careful when presuming to speak for "most programmers".


Here, at least 40% of programming is done in C-style syntax. The rest isn't in any consistent majority style. Most programmers are probably most familiar with C style, and almost all are familiar with it in some form.

https://www.tiobe.com/tiobe-index/


I don't think anyone's disputing the claim that "strange backwards type definitions are much less [familiar]". We're saying that familiar is not the same thing as readable(-once-you-know-what-it-is), and that you're talking about the former, not the latter.


It depends... if your return type is nested inside a class and you're defining a member function, you'll have to write down a qualified return type name before the qualified member function name, whereas if the return type follows the function name, it can be unqualified because at that point the class extends the scope used for name lookup.

http://www.stroustrup.com/C++11FAQ.html#suffix-return
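
A quick sketch of that case (Matrix is a hypothetical type):

    struct Matrix {
        struct Row { /* ... */ };
        Row row(int i);
        Row col(int j);
    };

    // Leading return type: must be qualified, because lookup hasn't
    // entered Matrix's scope yet.
    Matrix::Row Matrix::row(int) { return Row{}; }

    // Trailing return type: lookup happens in class scope, so the
    // unqualified name works.
    auto Matrix::col(int) -> Row { return Row{}; }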


I am not a Rust expert, but I assume the reason many recent programming languages have a keyword for function declarations is so that functions can be first-class objects.


This is one of the most trivial points I've ever heard for liking or disliking a programming language.


I agree. The language design is great, but they purposely use weird conventions everywhere. And by weird, I mean foreign to C++ programmers, who make up 90% of their user base.

They should have done what java did. Copy C++ syntax, only change it when needed. I've ported over java code where 2/3 of lines are nearly identical.

The async debacle is a great example of this. They settled on weird syntax instead of doing what every other language does, because of some holier-than-thou academic snobbery. If every other language does it that way, it would have worked fine in Rust.


The Rust type syntax is not weird. C-like languages are weird, because their syntax grew by accretion from a much simpler use case where something like "int x;" or even "int f(int a, int b);" could make sense. But even typedefs famously screw that up, never mind everything else.

> because of some holier than thou acedemic snobbery

The async syntax was one of the most widely discussed issues in Rust development, and ergonomics concerns were key in what eventually was chosen. It's very misleading to describe it as your comment does.

Re: parent comment, the Rust programming language book (free online) has a very nice section describing OOP-like patterns in Rust - as it turns out, the "good parts" of OOP are very nicely supported, and in a far simpler, more orthogonal way than what you get in C++. This means fewer dark corners in the language and something far easier to work with overall. Just because it may be different from what we did back in the 1990s, doesn't make it wrong!


I really like rust, the design is great. I'm glad they cleaned up OOP and ditched nasty C++ stuff like templates.

However, I went through the book recently and I'm quite annoyed that much of the syntax differs needlessly from C like languages. It massively increases the cognitive overhead for someone coming from Java, C++, C#, C-like language world.

Rust is a systems language; it doesn't even have a runtime. 90%+ of systems-level work is done in C-like languages. Rust's syntax differs in countless pointless ways. I'm not saying it's wrong, it's just different in ways that don't matter from all other popular systems languages. Which is dumb.

Things like "fn" instead of "function" and async syntax make Rust difficult to adopt by the target user base. Why fn? Is saving 6 characters worth confusing everyone?


> much of the syntax differs needlessly from C like languages. It massively increases the cognitive overhead for someone coming from Java, C++, C#, C-like language world.

And it decreases the cognitive overhead for someone coming from Python, Ruby, Go, heck even Haskell or Ocaml. "Cognitive overhead" over a simpler, more elegant syntax (and I think I've made the case that Rust typing syntax is simpler once you move beyond trivial cases!) is a temporary issue anyway - you get used to it very quickly. What I find quite puzzling here is the particular issue you're complaining about, wrt. the C/C++ type declarations. You actually like having to write out things like "template" and "typename"? Now of course Rust syntax is rather C-like in other ways, but still!


(It’s debatable that 90% of our user base is C++ programmers; in 2016, the last time we collected this data, it was 40%, and there’s no reason to believe it’s shifted THAT much since then.)


One aspect of this that you might be overlooking is that the syntax for function pointers has to follow the syntax of function declarations.

Why should a new language inflict this horror on its users:

    let i32 (*foobar)(i32, i32) = add;

When it can do this instead:

    let foobar: fn(i32, i32) -> i32 = add;



