
Considering that C++ has evolved a lot over the years (and grown quite large), what are good resources for a programmer to get started with the language in 2019?

I've heard Accelerated C++ is a good introduction, but it's quite old at this point. Would Accelerated C++ followed by Effective Modern C++ bring someone up to speed with modern C++? Is there a single book or online resource that would serve this purpose better?




I'll second Scott Meyers and add that Essential C++ covers the most fundamental parts of the language. Books beyond that tend to cover much more specific and optional tools.

Reading Essential C++ cover-to-cover was very worthwhile in my job as a programmer in AAA games. Books beyond that have mostly involved thumbing through different items to see which I might find useful at some point. Games in particular tend to be very performance-sensitive, but also sensitive to compile times and debug-build performance. A lot of the STL is not as respectful of those latter two, meaning a lot of game code uses little or even none of the STL. (Working with UE4 will involve using none, for example.) I'd definitely focus more attention on understanding core language features than on the huge amount of content that exists in the STL.

By far the best element of C++ that a lot of other languages lack is const and const-correctness. The second best would be the robust features templates have in comparison to generics of other languages (though the applications allowed by that robustness can be mildly horrifying at times).


Const correctness in C++ is a nice feature, but to say it is missing in many other languages is a bit of an exaggeration. Many other languages offer a much superior tool: immutable data types. Immutability is stronger than C++ const correctness and easier to use at the same time.

Templates are nowhere near the capabilities and ease of use of languages with proper (read: not accidental) macro/metaprogramming systems (e.g. Lisps) or languages with modern generic type systems designed from the ground up (Haskell, Scala/Dotty, Idris, etc). Templates are IMHO a powerful hack, but a hack is still a hack, with all the consequences: terrible error messages, slow compile times, difficult debugging, late error detection, a lot of accidental complexity caused by templates not being first-class citizens, etc.


> Immutability is stronger than C++ const correctness and easier to use at the same time.

C++-style const appears to be stronger than just immutability, since you can have immutable objects in C++, but you can also pass const references to mutable objects.


A const reference to a mutable object doesn't guarantee that the object won't change, unlike an immutable object. Hence const is weaker than immutability.

You can have immutable objects in C++, but C++ offers almost nothing to make dealing with such objects fast and easy. Also, the lack of GC makes designing persistent data structures an order of magnitude harder than in most other languages.
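
A minimal sketch of the distinction being drawn here (my own illustration, not from either commenter): a const reference only restricts mutation through that one alias, while the referenced object can still change underneath it.

  #include <vector>

  void observe(const std::vector<int>& v) {
      // v.push_back(42);          // error: no mutation through a const reference
      (void)v.size();              // reads are fine
  }

  int main() {
      std::vector<int> data{1, 2, 3};
      const std::vector<int>& view = data;   // const view of a mutable object
      observe(view);
      data.push_back(4);                     // the object changes underneath 'view'
      return static_cast<int>(view.size());  // 4: const did not mean immutable
  }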


That's only if you do not use a library and/or have no idea how shared_ptr is implemented.


The standard library doesn't come with persistent collections included. Just the fact that they are not standard, like in some other languages, causes fragmentation.

As for shared_ptr, they are a good idea when you don't care about performance. And they don't solve cycles, which may appear in some structures (e.g. graphs).
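
For context, a minimal sketch of the kind of persistent structure under discussion, using shared_ptr for structural sharing (my own illustration; the names are made up). It is acyclic by construction, so the cycle problem mentioned above does not arise here, but every operation does pay for reference-count traffic, which is the performance point being made.

  #include <memory>

  // Persistent (immutable) singly linked list: "adding" an element shares the
  // existing nodes instead of copying them; shared_ptr does the lifetime bookkeeping.
  template<class T>
  struct PList {
      struct Node {
          T head;
          std::shared_ptr<const Node> tail;
      };
      std::shared_ptr<const Node> root;

      PList push_front(T value) const {
          return PList{std::make_shared<const Node>(Node{std::move(value), root})};
      }
  };

  int main() {
      PList<int> empty;
      PList<int> one = empty.push_front(1);   // [1]
      PList<int> two = one.push_front(2);     // [2, 1], shares one's node
      return two.root->tail == one.root;      // 1: structural sharing, no copying
  }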


That's because it is undefined behavior to cast away const and modify an object that was defined const. Instead, you must tell the compiler via the mutable keyword, or else it will make optimizations based on the assumption that the object can't change.
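
A short sketch of both halves of that statement (my own example): mutable marks the one member that may legitimately change through a const access path, while casting const away from an object that was defined const is exactly what the compiler is allowed to assume never happens.

  #include <cstddef>
  #include <string>

  struct Widget {
      std::string name;
      mutable std::size_t reads = 0;          // 'mutable': writable even via a const path

      const std::string& get_name() const {
          ++reads;                            // legal, because the member is mutable
          return name;
      }
  };

  int main() {
      const Widget w{"gizmo"};                // the object itself is defined const
      w.get_name();
      w.get_name();
      // const_cast<Widget&>(w).name = "x";   // undefined behavior: w was defined const
      return static_cast<int>(w.reads);       // 2
  }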


Precisely why it's harder to use. C++ is more expressive, and thus creating self-consistent designs is harder.


C++ is more expressive than what? Than C probably. Than Java/C# - arguable. Than Scala/Haskell/Rust/Python/Ruby/R - no way.


More expressive than Java, sure. Java doesn't have overloaded operators, or cv-qualifiers, or templates, or many many many other things that make the type design space more expressive.

Ruby is hardly expressive at all in this respect - you can express to the interpreter very little about types, and the interpreter won't help you much at all.


C++ doesn't have GC, reflection, annotations, code generation utilities (annotation processors, cglib, etc), first-class generic types, existential types, rich standard library etc. That's why it is "arguable".

> you can express to the interpreter very little about types

Dynamic types are still types. Only the error detection moment is different, but lack of static types doesn't mean low expressivity.


In which of these languages can you have types depending on values? :=)


In Scala and Idris. Haskell has no direct support, but I believe you can get quite close with rank-2 types.

Also, typing is not the end of all things. Most languages I listed have much stronger metaprogramming capabilities than C++. The Scala, Rust, and Template Haskell macro systems are superior to C++ templates.


You'll have to give me an example for that. Having static types is more expressive than not having static types, but I think types make designs easier to make and understand. const is just another layer to the type system.


It's another aspect of type design you need to make decisions for. If you can't see that, I can't help you.


> Const correctness in C++ is a nice feature, but to say it is missing in many other languages is a bit exaggeration. Many other languages offer a much superior tool - immutable data types. Immutability is stronger than C++ const correctness and easier to use at the same time.

Technically C++ const can be used to implement immutable types just as they exist in other languages (and can be hidden behind a library entry point) but I agree that conceptually it's easier to think of an immutable string or vector as an inherent property of the object rather than one applied. And in C++ I don't think you can prevent casting away constness.

> Templates are nowhere near capabilities and ease of use of languages with proper (read: not accidental) macro/metaprogramming systems (e.g. Lisps) or languages with modern generic type systems designed from the ground up (Haskell, Scala/Dotty, Idris, etc). Templates are IMHO a powerful hack, but hack is still a hack with all the consequences - terrible error messages, slow compile times, difficult debugging, late error detection, a lot of accidental complexity caused by templates not being first-class citizens etc.

Templates were not an accidental hack for macros; as Stroustrup once said to me, "the ecological niche of 'macro' had already been polluted so templates were my only way to put macros into the language." I agree it sucks next to lisp macrology (but as a lisp developer since the 1970s I would say that wouldn't I?) but hell, the language makes a distinction between expressions and statements so there's only so much you can do.


> And in C++ I don't think you can prevent casting away constness.

Well, UB prevents you from doing it if the object is originally const.

My bigger gripe with the C++ (and C) const system is the lack of transitivity. A function taking a const X& may still modify e.g. the contents of an exposed pointer member of X.
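
A minimal illustration of that gripe (my own sketch): const applies to the pointer member itself, not to what it points at, so a function taking a const X& can still legally write through it.

  struct X {
      int* buffer;
  };

  void touch(const X& x) {
      // x.buffer = nullptr;   // error: the pointer member is const here
      x.buffer[0] = 42;        // compiles: the pointed-to ints are not const
  }

  int main() {
      int data[1] = {0};
      X x{data};
      touch(x);
      return data[0];          // 42: the "const" function modified reachable state
  }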


Is there a language with this behavior? I would find that very confusing.


> By far the best element of C++ that a lot of other languages lack is const and const-correctness.

Add to that volatile-correctness... which most people aren't even aware of. http://www.drdobbs.com/cpp/volatile-the-multithreaded-progra...


This article is nearly 2 decades old and gives very bad advice for multithreaded C++ programming. To a first approximation, volatile should never be used for multithreading safety.


That 2-decade-old article took some 5+ pages to thoroughly explain its ideas. Turns out it was (and still is) pretty compelling, and the changes in these past 2 decades don't really affect what it's saying.

You, who've surely carefully read the article, understood it in its entirety, and played around with the notion to get a feel for its upsides and downsides, very insightfully reduced it all down to "very bad advice" with zero elaboration. You find that compelling?

I would be careful with those "first-order approximations".


Volatile is not a memory barrier. Different threads can observe reordered accesses regardless of volatile.

There's a reason that it's been proposed for removal (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p115...)


Where did you see the article claim it was a memory barrier?


It doesn't say it's a memory barrier, but it absolutely has to be for the code to work.

I agree I should be careful with "first-order approximations", but honestly I was being gentle because I do love drdobbs. But all of the things it talked about have been replaced with things that aren't broken in subtle and hard-to-debug ways.

Volatile simply cannot be used as a general-purpose, portable synchronization primitive.


> It doesn't say it's a memory barrier, but it absolutely has to be for the code to work.

No, it doesn't say it because it's not trying to make the point that you assume it's trying to make. Honest question: did you fully read and digest the article before commenting? If so, tell me precisely which line shows a lack of a memory barrier causing a problem (describe the race condition & bug you found) and explain how exactly you found that to undermine the point of the article.


You don't even need to read the whole article to see the GP's point: the very first example with flag_ is concurrent, unsynchronized access to a shared volatile, and the article promotes it as "the right way".

Yes, it goes on to elaborate on a basically unrelated use of volatile to control access to member functions on classes, which deserves a separate discussion - but you don't even need to get past the first few paragraphs to see that it promulgates the broken idea that volatile makes concurrent access safe. It doesn't.


The code in the article requires memory barriers. It doesn't have them. It's broken code.

The author is using volatile as though it implied a barrier.


> The code in the article requires memory barriers. It doesn't have them. It's broken code.

> The author is using volatile as though it implied a barrier.

No, you're just not reading the article. Please go read the article. And I don't mean skim. I mean actually read it with the assumption that you have zero clue what it's going to say, because that's more accurate than your current assumption. Then if you still think you're correct, please explain exactly which line(s) in the code are broken and how precisely that actually undermines the points the article has been making. You will struggle to do this.

In case it helps, for your reference: the author isn't, and never was, a random C++ dummy.


Your comments in this thread have broken the HN guidelines. Would you mind reviewing them and sticking to the rules when posting here? We'd appreciate it.

https://news.ycombinator.com/newsguidelines.html

Getting personal, bringing up whether someone read an article properly, making uncharitable interpretations of other comments, snarking, and posting in the flamewar style are all things they ask you not to do and which we're trying to avoid here. Not that your comments were anything like as bad as some that we see, but even in seed form these things have ill effects.


Hi dang, thanks for the heads up (and yes, I'll review them). Trouble I've been having is I feel I already tried to follow the tips in the guideline in my initial comments (see [1]), but it didn't work -- people still commented claiming the article is recommending the opposite of what it's actually claiming, meaning they clearly did not read all of it. I'm stuck on what to do. If I can't explicitly tell them to go read the article, then what is the proper response?

Edit: Someone deleted one of their replies here. Just wanted to say thanks, I read it and I think it'll be helpful moving forward.

[1] https://news.ycombinator.com/item?id=20430310


> If I can't explicitly tell them to go read the article, then what is the proper response?

"This isn't what the article says. For instance, in paragraph n, the author states 'x, y and z.'"


This is basically what I tried in the comment of mine that I linked to, but it only works if they make a specific claim that the article refutes. It doesn't really work in response to "this article is very bad"...


> You, who've surely carefully read the article, understood it in its entirety, and played around with the notion to get a feel for its upsides and downsides, very insightfully reduced it all down to "very bad advice" with zero elaboration. You find that compelling?

I’m pretty sure you did not try to follow the guideline in your initial comments. (see [0])

[0] https://news.ycombinator.com/item?id=20430270


Yes, not in that comment. I was at a loss on how to reply to a comment that just trashed the article as "very bad" and left it at that; I'm thinking maybe I shouldn't have replied at all. But I tried to do things a little better in the next one. I failed regardless, though, so that's why I'm hoping someone can offer a new approach.


> Just like its better-known counterpart const, volatile is a type modifier. It's intended to be used in conjunction with variables that are accessed and modified in different threads.

Simply not true. It has limited use with memory mapped I/O (although even there it misses necessary guarantees), but is not intended to work with threads.

> So all you have to do to make Gadget's Wait/Wakeup combo work is to qualify flag_ appropriately:

    class Gadget
    {
    public:
        ... as above ...
    private:
        volatile bool flag_;
    };
This is not correct, and will not work reliably.

I spent some time working with Andrei at Facebook, and he's a smart guy, but this article is wrong.

Don't do what he says here.

Volatile needs to go away.


He's describing the current state of affairs in that statement and providing background context, which was that volatile was seen as a solution to the memory barrier issue at that time, which we know to be incorrect now, but which was the closest approximation the C++ standard had to a half-solution at the time. That's NOT the point of the article, it's just background context for 2001 readers. There's a whole article after that which does not tell you to use volatile that way, and the entire reason I posted it was that part. Did you read past that paragraph at all, till the end? Did you understand what the article was actually trying to tell you, or did you just try to find a code snippet that didn't look right without bothering to read the full article? Did you see he literally advises you in the end: "DON'T use volatile directly with primitive types"? The entire point of the article is to tell you about a use case that is explicitly not the one you're imagining.


The paragraph where he claims that volatile works?


The paragraph where he says "DON'T use volatile directly with primitive types".


Even though it was written by Alexandrescu, I can say without a doubt that this 2001 article doesn't represent the current state of thought around MT programming and volatile. I'd think of it more as a historical artifact than anything.


The fact that it doesn't doesn't mean it shouldn't be. It's a damn useful method, it just didn't become popular.


It’s completely broken. One of the modern Meyers books even has a chapter on not using volatile in the Dr. Dobbs article manner.

When the article was written, there was no real alternative, and volatile accidentally worked nicely on certain architectures. It failed on others. It absolutely was never designed to do what you're trying to defend. How it handled memory read/write barriers has always been non-portable, implementation- and architecture-specific behavior. Now that there are proper ways to do barriers portably, the volatile approach is terrible advice.

C++11 addressed this all in a proper manner, after much research and many papers on the matter. Since then, for major compilers on major architectures, the new C++11 features have been implemented correctly. Volatile has zero use for correct multithreaded code. It only has use for memory-mapped hardware from a single properly synchronized thread.

Your article, as people keep telling you but you seem unable to accept it, is wrong. It’s now absolutely not portable, it’s inherently broken, and leads to undefined, hard to debug, terrible behavior for threading issues.

Go dig up the backstory on how C++11 got its threading model and dig up the More Effective C++ chapter on it to learn why your article is bad.


It sounds like you don't get what the article's point is. The article is NOT using volatile as a barrier mechanism. It's using it as a compiler-enforced type annotation, which you strip away before accessing the variable of interest. It sounds like absolutely nobody here is willing to read the article because they think they already know everything the article could possibly be saying. Fine, I give up, you win. I've summarized it here for you.

The idea is this: you can use volatile like below. It's pretty self-explanatory. Now can you look through this code and tell me where you see such a horrifying lack of memory barriers and undefined behavior? (And don't point out something irrelevant like how I didn't delete the copy constructor.)

  #include <mutex>

  template<class T>
  class lock_ptr
  {
      T *p;
  public:
      ~lock_ptr() { this->p->m.unlock(); }
      lock_ptr(volatile T *p) : p(const_cast<T *>(p)) { this->p->m.lock(); }
      T *operator->() const { return p; }
  };

  class MyClass
  {
  public:
      int x;
      MyClass() : x() { }
      mutable std::mutex m;
      void bar() { ++x; }
      void foo() volatile { return lock_ptr<MyClass>(this)->bar(); }
  };

  void worker(volatile MyClass *p)  // called in multiple threads
  {
      p->foo();     // thread-safe, and compiles fine
      // p->bar();  // thread-unsafe, and a compile-time error if uncommented
  }

  #include <future>

  int main()
  {
      MyClass c;
      auto a = std::async(worker, &c);
      auto b = std::async(worker, &c);
      a.wait();
      b.wait();
      return c.x;
  }


> It sounds like you don’t get what the article's point is.

Yes I do. It’s simply wrong. What it says about type annotation is correct, but has zero to do with threading because volatile has zero meaning for accesses from different threads. It then uses volatile to (incorrectly) build threading code. You seem to think volatile has some usefulness for threaded code; it does not. You think volatile adds benefit to your code above; it does not. The type annotation does not give you the ability to have compilers check race conditions for you - it works on some and will fail on others.

Add volatile to your bar function. Oops, got race conditions. Volatile is not protecting your code; properly using mutexes is. Requiring programmers to intersperse volatile as some type annotation makes code more error prone, not less. One still has to correctly do the hard parts, but now with added confusion, verbosity, and treading on undefined behavior.

I think you believe his claim “We can make the compiler check race conditions for us.” because you're relying on the same claim, that compilers will check volatile in the manner your code above does. That's undefined behavior, open to compiler whims. Good luck with that. There's a reason C++ added the more nuanced ordering specifications: to handle the myriad ways some architectures worked (and to mirror discoveries made in academic literature on the topic that happened after this article was written).

This article is even mentioned in the proposal to remove volatile from C++ altogether http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p115.... I’ve known about this for some time, and hacking in type annotations like this adds no value; it simply makes a mess.

More errors from the article, which is why people should stop citing it:

First sentence:

“The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.”

This is simply wrong. The article goes on to try to make multithreaded code correct using volatile.

More quotes from the article that are simply wrong:

“Although both C and C++ Standards are conspicuously silent when it comes to threads, they do make a little concession to multithreading, in the form of the volatile keyword.” Wrong; see the Sutter quote below.

“Just like its better-known counterpart const, volatile is a type modifier. It's intended to be used in conjunction with variables that are accessed and modified in different threads.” Wrong. See the Sutter quote, and the ISO standards. Volatile was never intended for this, so it was never safe for doing this.

“In spite of its simplicity, LockingPtr is a very useful aid in writing correct multithreaded code. You should define objects that are shared between threads as volatile” Wrong on so many levels. The referenced code will break on many, many architectures. There is simply no defense of this.

The article has dozens more incorrect statements and code samples trying to make threadsafe code via volatile.

I’ve written articles on this. I’ve taught professional programmers this. I’ve designed high performance C++ multithreaded code for quite a while. It’s simply wrong, full stop.

Here’s a correct destruction of the Dobbs article by someone who gets it [1]. They, like you, were once misled by this article.

The money quote, from Herb Sutter “Please remember this: Standard ISO C/C++ volatile is useless for multithreaded programming. No argument otherwise holds water; at best the code may appear to work on some compilers/platforms”

I suspect you’ll still stick to the claim this article has value, given your insistence so far against so many people giving you correct advice. Good luck.

[1] https://sites.google.com/site/kjellhedstrom2/stay-away-from-...


> “The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.”

> This is simply wrong.

Hardware interrupts and UNIX signals are the asynchronous events in question, and C's volatile is still useful in those contexts, where there is only a single thread of execution.
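
A sketch of that classic single-threaded use (my own code, using the conventional volatile sig_atomic_t flag): the handler interrupts the same thread, so all that's needed is that the compiler reread the flag instead of caching it in a register.

  #include <csignal>

  volatile std::sig_atomic_t stop_requested = 0;

  void on_sigint(int) {
      stop_requested = 1;   // about the only portable thing a handler may do with it
  }

  int main() {
      std::signal(SIGINT, on_sigint);
      while (!stop_requested) {
          // ... do work, re-checking the flag each iteration ...
      }
      return 0;
  }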


Volatile still doesn’t protect you there, whereas C++11 atomics do. If the item you mark volatile is not changed atomically at the CPU and cache level, you’re going to access torn variables. I’ve been there and am certain about it. And pre-C++11, there was no way to portably find out which operations are atomic on a given architecture, so it was impossible to write such code portably. C++11 fixed all that, and there’s no reason to use volatile for any of this any more: use atomics, possibly with fine-grained barriers if needed and understood.

Here’s a compiler showing that your use fails on some systems:

http://www.keil.com/support/docs/2801.htm


You're right that just "volatile" isn't enough; typically you'd declare the variable sig_atomic_t to be portable, which has made the necessary guarantees since C89 and so predates C++11. (It does not guarantee anything regarding access from multiple threads, of course.)

The problem with std::atomic<T> is that it may be implemented with a mutex, in which case it can deadlock in a signal handler. But as you say, you can check for that with is_lock_free.
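
A sketch of the C++11-and-later alternative being described (my own code): use an atomic flag and verify it is lock-free, since a mutex-based std::atomic could deadlock inside a handler.

  #include <atomic>
  #include <csignal>

  std::atomic<int> stop{0};

  // C++17 compile-time check; pre-C++17 you can test stop.is_lock_free() at runtime.
  static_assert(std::atomic<int>::is_always_lock_free,
                "needs a lock-free atomic to be safe in a signal handler");

  void on_signal(int) {
      stop.store(1, std::memory_order_relaxed);
  }

  int main() {
      std::signal(SIGINT, on_signal);
      while (!stop.load(std::memory_order_relaxed)) {
          // ... work ...
      }
      return 0;
  }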


Yep. And this thread illustrates why threading is hard, especially in C++ :)

Oh, and sig_atomic_t is not guaranteed thread-safe, only signal-safe. The difference shows up when you move your code from a single-CPU to a dual-CPU system and it breaks. I ran across this some time ago moving stuff to an ESP32.

Atomic so far works best across the chips I’m poking at.


It shouldn't be.

The stuff the article recommends is straight up UB in modern C++. Volatile has never been specified to work properly with threads, but before C++11 when there was no alternative, some limited use in that context, preferably hidden away from the casual user, may have been acceptable. Recommending these techniques today, however, makes no sense.


It should be.

The stuff you're talking about is not the same stuff I'm talking about. There's nothing UB about the locking pointer pattern and how it uses volatile. Read the article in full. It has a specific thesis that is just as valid today as it was 20 years ago, and that thesis is NOT the 2001 malpractice you're talking about.


Yes the locking pointer pattern shown there is also UB because it is UB to define something as volatile and then cast away the volatile and use it, which is the core of that technique.

Yes, it's not UB in the race sense, because he is using mutexes everywhere there and just sort of overloading the volatile qualifier to catch member function calls outside the lock. In addition to being UB, it's weird - why not just encapsulate the object itself inside a class that only hands out access under control of a lock? That is, why have the volatile object passed in from the outside if you will never legally access the object?

The very premise of this article, that volatile is for concurrently modified objects across threads is false in modern C++ - and the very first example is a faulty use of volatile under the assumption that unguarded concurrent volatile access is safe.
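
For reference, a minimal sketch of the alternative design suggested above, where the object is wrapped and only handed out while a lock is held, with no volatile involved (my own code; the Synchronized name is made up):

  #include <mutex>
  #include <vector>

  template<class T>
  class Synchronized {
      T value;
      std::mutex m;
  public:
      // The wrapped object is only ever reachable while the lock is held.
      template<class F>
      auto with_lock(F f) {
          std::lock_guard<std::mutex> guard(m);
          return f(value);
      }
  };

  int main() {
      Synchronized<std::vector<int>> shared;
      shared.with_lock([](std::vector<int>& v) { v.push_back(1); });
      return shared.with_lock([](std::vector<int>& v) {
          return static_cast<int>(v.size());   // 1
      });
  }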


> it is UB to define something as volatile

Can you point me to which part of the standard says that it's UB to cast away a volatile reference to a non-volatile object? See my example in [1] if you don't see why the object itself doesn't need to be volatile.

> it's weird

No, you're just not used to it. It's perfectly fine once you use it a bit. And regardless, there's quite a huge chasm between "it's completely wrong and undefined behavior" and "I don't like it, it's weird".

> why not just encapsulate the object itself inside a class that only hands out access under control of a lock?

That's a separate discussion. Right now we need to get the UB-ness claims out of the way. Once we agree it's correct in the first place then we can discuss whether it looks "weird" or what its use cases might be.

[1] https://news.ycombinator.com/item?id=20430882


> Can you point me to which part of the standard says that it's UB to cast away a volatile reference to a non-volatile object?

That is not UB, it's only UB if the object was defined volatile, which is what the article does, explicitly:

> You should define objects that are shared between threads as volatile and never use const_cast with them — always use LockingPtr automatic objects. Let's illustrate this with an example [Example goes on to define the object volatile]

> No, you're just not used to it. It's perfectly fine once you use it a bit. And regardless, there's quite a huge chasm between "it's completely wrong and undefined behavior" and "I don't like it, it's weird".

There might be a glimmer of something interesting in overloading the use of volatile on user-defined types as a second type of access control analogous to "const" but that you use for some other purpose, e.g., gating access to functions based on their thread-safety, or anything else really.

This article doesn't make a convincing case for it because the first example is UB, the second example is UB, it propagates the broken notion that volatile is useful for concurrent access to primitive types, it doesn't include any discussion of modern techniques like std::atomic<>, etc. Of course, that's no fault of the author, who wrote it in 2001 when the well-defined way of doing things was 10 years away.

It's mostly a problem when people try to promote this, today, as an insightful view on volatile and multithreaded code. As a whole, it isn't and propagates various falsehoods that people have been trying to get rid of forever. What glimmer of an interesting point is in there regarding using volatile-qualified objects as a second level of access control orthogonal to const is washed out by the other problems.

> That's a separate discussion. Right now we need to get the UB-ness claims out of the way.

It's UB. Just admit that it's UB, because the flag_ example performs concurrent accesses to an object from different threads, at least one of which is a write, and the LockingPtr and follow-on examples are UB because they involve casting away volatile from a volatile-defined object.

If you can agree with that, then maybe you can present a related technique, different to the one in the article, which uses volatile in a useful way.


"Just admit" what? That applying volatile to an object and casting that away like with the flag_ example is UB? Yeah, I that's UB. It also wasn't the point of the article, and the use of volatile required for the technique the article is what actually matters, which isn't UB.

Can we step back for a second?

Go back to my top comment. Why did I even post this article in the first place? The point was that "volatile-correctness" is (basically) awesome, and it's hard to get something like it in other languages. This article is where the idea originated from, so I linked to it. i.e.: "There's something called volatile-correctness, which you can learn about by reading this article." The point was not "read this article and blindly sprinkle volatile across your codebase in exactly the same manner and you'll magically get thread safety".

What were you supposed to take away from the article? The idea of volatile-correctness, the idea that you can use a locking pointer to regulate multithreaded access to a class's methods. The idea that volatile acts as a helpful type annotation in this regard, independently of its well-known effects on primitive objects. You can apply it easily without ever marking objects as volatile, like I just showed you in that example. Yet somehow instead of actually extracting the fundamental concepts and ideas from the article, you and everyone else here are trashing it by insisting that the only possible way anyone can read that article is a naive verbatim copy-paste of its text from 2001 to 2019...? Why?

> If you can agree with that, then maybe you can present a related technique, different to the one in the article, which uses volatile in a useful way.

But omitting a couple of volatiles doesn't make it a different technique! You just skip the incorrect uses of volatile. The technique is the same.


"Just admit" that the stuff in the article is UB, because you were going around badgering people to point out the UB, and because your last post demands: "Right now we need to get the UB-ness claims out of the way. Once we agree it's correct in the first place..."

So yes, let's get the UB claims out of the way - by agreeing that it's UB. Not just the flag_ example, but also the LockingPtr example that is the "point" of the article.

> you and everyone else here are trashing it

To be clear, I'm not really "trashing" the article. It's a relic of its time. I am trashing the idea that it's somehow a good introduction to any clever MT technique today.

> by insisting that the only possible way anyone can read that article is a naive verbatim copy-paste of its text from 2001 to 2019...? Why?

I explained it earlier: because the article has too many flaws to be a clean illustration of the technique. It starts with UB, ends with UB, makes wrong assertions about the purpose of volatile, etc.

Again, I agree there might be a glimmer of something here - but this article isn't the way to show it. The reaction you got was expected and fine. I can imagine a different article, written today, without the claims about the purpose of volatile, without the flag example, without the UB of casting away volatile from volatile objects, acknowledging the existence of std::atomic and how this technique complements or replaces it. That could be useful.

I looked at your example, and yes, I see the potential if you want to have an object with a thread-safe and non-threadsafe interface split like that (or really any split: you can overload volatile like that for any type of access control where you can cleanly divide the functions like that). It has the unfortunate side effect that volatile is not for that, and it implicitly makes all your members volatile and hence may pessimize code generation. I guess it doesn't matter that much if all the volatile functions follow the pattern of immediately shelling out to a non-volatile function though.


Maybe someone should write a more modern version of the article, I don't know.

I would also not expect it to pessimize code generation, since the final dereference should always be of a non-volatile pointer, though I suppose an optimizer bug might make it behave otherwise.

You can combine it with atomic, they're not substitutes. It could let you implement two versions of an algorithm: a lock-free multithreaded one, along with a single-threaded one that uses relaxed accesses (or even fully non-atomic accesses, had C++ allowed that). And then you'd auto-dispatch on the volatile modifier. The possibilities are really endless; I'm sure the limiting factor here is our imagination.

I've thought about the other types of split for a long time too, and I haven't managed to come up with other compelling use cases, even though I also feel they should exist. It would be interesting if someone could come up with one, because the ability to have commutative tags on a type seems really powerful.
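
A minimal sketch of the dispatch-on-the-volatile-qualifier idea described above (my own code, not from the thread): the thread-shared overload is chosen whenever the object is reached through a volatile access path, and the cheaper single-threaded one otherwise. Note the objects themselves are never defined volatile, consistent with the earlier discussion.

  #include <atomic>

  class Counter {
      std::atomic<long> n{0};
  public:
      // Chosen when reached via volatile Counter* / volatile Counter&.
      void add(long v) volatile {
          n.fetch_add(v);                      // full read-modify-write, thread-safe
      }
      // Chosen for plain, single-threaded access paths.
      void add(long v) {
          n.store(n.load(std::memory_order_relaxed) + v,
                  std::memory_order_relaxed);  // no synchronization cost
      }
      long get() const { return n.load(); }
  };

  void shared_use(volatile Counter& c) { c.add(1); }  // picks the volatile overload
  void private_use(Counter& c)         { c.add(1); }  // picks the plain overload

  int main() {
      Counter c;
      private_use(c);
      shared_use(c);
      return static_cast<int>(c.get());        // 2
  }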


“A tour of C++” (second edition) by Stroustrup is a great starting point.


C++ is deep and nuanced, so reading books will help structure your learning. I've found Scott Meyers' books to be great for starting out. Those will give you a fantastic foundation, from which you can dive deeper. Those and others have added significantly to my ability to write clean and maintainable software.

This SO post is a great guide for where to look: https://stackoverflow.com/questions/388242/the-definitive-c-...


I've been learning C++ for the first time starting in 2019.

I used Marc Gregoire's Professional C++, which has a version that was published last year and includes C++17, alongside Scott Meyers' line of books, and watched a variety of YouTube videos (e.g. Jason Turner, CPP talks...)


I second the recommendation of Professional C++. I am just a self-taught programmer, and to be quite honest, felt I was getting in a bit over my head by buying a book aimed at professionals. But I have found the material to be perfectly accessible even for someone without a CS degree, and I am now using C++ for my personal projects. I cannot recommend that book highly enough. Just my $.02


It's hard to name just a single resource, and some HN mates have already listed excellent resources. I'd like to add a little bit that perhaps someone will find useful:

- Books by B. Stroustrup are excellent if you already know programming. I wish for a new edition of "The C++ Programming Language", to be honest. I learnt a lot of useful "tips" from many books, but only by reading his books was I able to see why C++ made certain choices. That really helped me "level up".


I recommend going through some recent CppCon presentations, especially those by committee members and people who implemented features/libraries -- Bjarne Stroustrup, Herb Sutter, Howard Hinnant, Chandler Carruth, etc.

Also have a look at Mike Acton's DOD videos -- he'll tell you that modern C++ (and even OOP) is garbage, and he'll be right for his own particular case :)


Stroustrup -> Meyers -> Alexandrescu (optional but lots of very clever ideas)


A Tour of C++, 2nd edition, by Bjarne Stroustrup is quite good for getting to know all the major updates and how to write good C++20 code (the book goes through all the relevant updates since C++11).



