
Even though it was written by Alexandrescu, I can say without a doubt that this 2001 article doesn't represent the current state of thought around MT programming and volatile. I'd think of it more as a historical artifact than anything.



The fact that it doesn't doesn't mean it shouldn't. It's a damn useful method; it just never became popular.


It’s completely broken. One of the modern Meyers books (Effective Modern C++) even has an item on not using volatile in the manner of the Dr. Dobb's article.

When the article was written, there was no real alternative, and volatile accidentally worked nicely on certain architectures. It failed on others. It absolutely was never designed to do what you’re trying to defend. It has always been non-portable: whether it implied any memory read/write barriers was implementation- and architecture-specific behavior. Now that there are proper, portable ways to express barriers, the volatile approach is terrible advice.

C++11 addressed all of this in a proper manner, after much research and many papers on the matter. Since then, the new C++11 features have been implemented correctly by the major compilers on the major architectures. Volatile has zero use in correct multithreaded code. Its only remaining use is memory-mapped hardware, accessed from a single, properly synchronized thread.
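
To be concrete, this is roughly what the C++11 replacement for the old volatile-flag idiom looks like (a minimal sketch of my own, with a generic stop flag, not anything from the article):

  #include <atomic>
  #include <thread>

  std::atomic<bool> stop{false};   // not volatile: std::atomic supplies the guarantees

  void worker()
  {
      while (!stop.load())         // sequentially consistent by default
      {
          // ... do work ...
      }
  }

  int main()
  {
      std::thread t(worker);
      stop.store(true);            // visible to the worker, no data race, no UB
      t.join();
  }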

Your article, as people keep telling you but you seem unable to accept, is wrong. It’s now absolutely not portable, it’s inherently broken, and it leads to undefined, hard-to-debug, terrible behavior in threaded code.

Go dig up the backstory on how C++11 got its threading model, and the Effective Modern C++ item on it, to learn why your article is bad.


It sounds like you don't get what the article's point is. The article is NOT using volatile as a barrier mechanism. It's using it as a compiler-enforced type annotation, which you strip away before accessing the variable of interest. It sounds like absolutely nobody here is willing to read the article because they think they already know everything the article could possibly be saying. Fine, I give up, you win. I've summarized it here for you.

The idea is this: you can use volatile like below. It's pretty self-explanatory. Now, can you look through this code and tell me where you see such a horrifying lack of memory barriers and undefined behavior? (And don't point out something irrelevant like how I didn't delete the copy constructor.)

  #include <mutex>

  // RAII wrapper: takes a volatile T*, casts the qualifier away,
  // and holds T's mutex for its own lifetime.
  template<class T>
  class lock_ptr
  {
      T *p;
  public:
      lock_ptr(volatile T *p) : p(const_cast<T *>(p)) { this->p->m.lock(); }
      ~lock_ptr() { this->p->m.unlock(); }
      T *operator->() const { return p; }
  };

  class MyClass
  {
  public:
      int x;                  // public so main() can read the result
      mutable std::mutex m;   // locked/unlocked by lock_ptr
      MyClass() : x() { }
      void bar() { ++x; }     // not volatile-qualified: unreachable through a volatile MyClass*
      void foo() volatile { return lock_ptr<MyClass>(this)->bar(); }
  };

  void worker(volatile MyClass *p)  // called in multiple threads
  {
      p->foo();  // thread-safe, and compiles fine
      //p->bar(); // thread-unsafe: uncommenting this line is a compile-time error
  }

  #include <future>

  int main()
  {
      MyClass c;
      auto a = std::async(worker, &c);
      auto b = std::async(worker, &c);
      a.wait();
      b.wait();
      return c.x;
  }


> It sounds like you don’t get what the article’s point is.

Yes I do. It’s simply wrong. What it says about type annotation is correct, but has zero to do with threading because volatile has zero meaning for accesses from different threads. It then uses volatile to (incorrectly) build threading code. You seem to think volatile has some usefulness for threaded code; it does not. You think volatile adds benefit to your code above; it does not. The type annotation does not give you the ability to have compilers check race conditions for you - it works on some and will fail on others.

Add volatile to your bar function. Oops, got race conditions. Volatile is not protecting your code; properly using mutexes is. Requiring programmers to intersperse volatile as some type annotation makes code more error prone, not less. One still has to correctly do the hard parts, but now with added confusion, verbosity, and treading on undefined behavior.

I think you believe his claim “We can make the compiler check race conditions for us.” because you’re relying on the assumption that compilers will check volatile in the manner your code above does. That’s undefined behavior, open to compiler whims. Good luck with that. There’s a reason C++ added the more nuanced ordering specifications - to handle the myriad ways some architectures worked (and to mirror discoveries made in the academic literature on the topic after this article was written).
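
To show what those ordering specifications give you, here is a minimal sketch of my own (not the article’s code): a release store pairs with an acquire load, so the plain data written before the store is guaranteed visible after the load, with no volatile anywhere:

  #include <atomic>
  #include <thread>

  int payload = 0;                    // plain, non-atomic data
  std::atomic<bool> ready{false};

  void producer()
  {
      payload = 42;
      ready.store(true, std::memory_order_release);        // publish
  }

  void consumer()
  {
      while (!ready.load(std::memory_order_acquire)) { }   // wait for publication
      // payload is guaranteed to be 42 here
  }

  int main()
  {
      std::thread a(producer), b(consumer);
      a.join();
      b.join();
  }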

This article is even mentioned in the proposal to remove volatile from C++ altogether http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p115.... I’ve known about this for some time, and hacking in type annotations like this adds no value; it simply makes a mess.

More errors from the article, which is why people should stop citing it:

First sentence:

“The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.”

This is simply wrong. The article goes on to try to make multithreaded code correct using volatile.

More quotes from the article that are simply wrong:

“Although both C and C++ Standards are conspicuously silent when it comes to threads, they do make a little concession to multithreading, in the form of the volatile keyword.” Wrong; see Sutter quote below.

“Just like its better-known counterpart const, volatile is a type modifier. It's intended to be used in conjunction with variables that are accessed and modified in different threads.” Wrong. See Sutter quote, and ISO standards. Volatile was never intended for this, so was never safe for doing this.

“In spite of its simplicity, LockingPtr is a very useful aid in writing correct multithreaded code. You should define objects that are shared between threads as volatile” Wrong on so many levels. The referenced code will break on many, many architectures. There is simply no defense to this.

The article has dozens more incorrect statements and code samples trying to make threadsafe code via volatile.

I’ve written articles on this. I’ve taught professional programmers this. I’ve designed high performance C++ multithreaded code for quite a while. It’s simply wrong, full stop.

Here’s a proper takedown of the Dr. Dobb's article by someone who gets it [1]. They, like you, were once misled by this article.

The money quote, from Herb Sutter: “Please remember this: Standard ISO C/C++ volatile is useless for multithreaded programming. No argument otherwise holds water; at best the code may appear to work on some compilers/platforms”

I suspect you’ll still stick to the claim this article has value, given your insistence so far against so many people giving you correct advice. Good luck.

[1] https://sites.google.com/site/kjellhedstrom2/stay-away-from-...


> “The volatile keyword was devised to prevent compiler optimizations that might render code incorrect in the presence of certain asynchronous events.”

> This is simply wrong.

Hardware interrupts and UNIX signals are the asynchronous events in question, and C's volatile is still useful in those contexts, where there is only a single thread of execution.


Volatile still doesn’t protect you there, whereas C++11 atomics do. If the item you mark volatile is not changed atomically at the CPU and cache level, you’re going to read torn variables. I’ve been there and am certain about it. And pre-C++11 there was no way to portably find out which operations were atomic on a given architecture, so it was impossible to write such code portably. C++11 fixed all that, and there’s no reason to use volatile for any of this any more: use atomics, possibly with fine-grained barriers if needed and understood.

Here’s a compiler showing that your use fails on some systems:

http://www.keil.com/support/docs/2801.htm


You're right that just "volatile" isn't enough; typically you'd declare the variable sig_atomic_t to be portable, which has provided the necessary guarantees since C89 and so predates C++11. (It does not guarantee anything regarding access from multiple threads, of course.)
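
i.e. the classic pattern is something like this (my sketch, not from the article):

  #include <csignal>

  volatile std::sig_atomic_t got_signal = 0;   // the one case volatile is still for

  extern "C" void handler(int)
  {
      got_signal = 1;   // async-signal-safe: a single, untearable write
  }

  int main()
  {
      std::signal(SIGINT, handler);
      while (!got_signal)
      {
          // single-threaded main loop (busy-waits here only for brevity)
      }
  }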

The problem with std::atomic<T> is that it may be implemented with a mutex, in which case it can deadlock in a signal handler. But as you say, you can check for that with is_lock_free.
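
For example (a sketch; the compile-time is_always_lock_free check is C++17, the runtime is_lock_free() member is C++11):

  #include <atomic>
  #include <csignal>

  std::atomic<int> signals_seen{0};

  // C++17: refuse to build if there could be a hidden mutex,
  // which would make the handler below unsafe.
  static_assert(std::atomic<int>::is_always_lock_free,
                "atomic<int> must be lock-free to be async-signal-safe");

  extern "C" void on_signal(int)
  {
      signals_seen.fetch_add(1);   // OK in a handler only because it's lock-free
  }

  int main()
  {
      std::signal(SIGINT, on_signal);
      // C++11-only alternative: check signals_seen.is_lock_free() at runtime
  }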


Yep. And this thread illustrates why threading is hard, especially in C++ :)

Oh, and sig_atomic_t is not guaranteed thread-safe, only signal-safe. The difference shows up when you move your code from a single-CPU to a dual-CPU system and it breaks. I ran across this some time ago moving stuff to an ESP32.

Atomic so far works best across the chips I’m poking at.


It shouldn't be.

The stuff the article recommends is straight up UB in modern C++. Volatile has never been specified to work properly with threads, but before C++11 when there was no alternative, some limited use in that context, preferably hidden away from the casual user, may have been acceptable. Recommending these techniques today, however, makes no sense.


It should be.

The stuff you're talking about is not the same stuff I'm talking about. There's nothing UB about the locking pointer pattern and how it uses volatile. Read the article in full. It has a specific thesis that is just as valid today as it was 20 years ago, and that thesis is NOT the 2001 malpractice you're talking about.


Yes, the locking pointer pattern shown there is also UB, because it is UB to define something as volatile and then cast away the volatile and use it, which is the core of that technique.

Yes, it's not UB in the race sense, because he is using mutexes everywhere there and just sort of overloading the volatile qualifier to catch member function calls outside the lock. But in addition to being UB, it's weird - why not just encapsulate the object itself inside a class that only hands out access under control of a lock? That is, why have the volatile object passed in from the outside if you will never legally access the object?
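
Something like this, roughly (my sketch of that alternative, with made-up names and no volatile anywhere):

  #include <functional>
  #include <future>
  #include <mutex>

  // Owns the object and only hands out access while holding the lock.
  template<class T>
  class synchronized
  {
      T value;
      std::mutex m;
  public:
      template<class F>
      auto with_lock(F f) -> decltype(f(value))
      {
          std::lock_guard<std::mutex> g(m);
          return f(value);
      }
  };

  struct Counter { int x = 0; void bump() { ++x; } };

  void worker(synchronized<Counter> &c)   // called from multiple threads
  {
      c.with_lock([](Counter &counter) { counter.bump(); });
      // there is no way to reach the Counter except through with_lock
  }

  int main()
  {
      synchronized<Counter> c;
      auto a = std::async(worker, std::ref(c));
      auto b = std::async(worker, std::ref(c));
      a.wait();
      b.wait();
  }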

The very premise of this article - that volatile is for concurrently modified objects across threads - is false in modern C++, and the very first example is a faulty use of volatile under the assumption that unguarded concurrent volatile access is safe.


> it is UB to define something as volatile

Can you point me to which part of the standard says that it's UB to cast away a volatile reference to a non-volatile object? See my example in [1] if you don't see why the object itself doesn't need to be volatile.

> it's weird

No, you're just not used to it. It's perfectly fine once you use it a bit. And regardless, there's quite a huge chasm between "it's completely wrong and undefined behavior" and "I don't like it, it's weird".

> why not just encapsulate the object itself inside a class that only hands out access under control of a lock?

That's a separate discussion. Right now we need to get the UB-ness claims out of the way. Once we agree it's correct in the first place then we can discuss whether it looks "weird" or what its use cases might be.

[1] https://news.ycombinator.com/item?id=20430882


> Can you point me to which part of the standard says that it's UB to cast away a volatile reference to a non-volatile object?

That is not UB, it's only UB if the object was defined volatile, which is what the article does, explicitly:

> You should define objects that are shared between threads as volatile and never use const_cast with them — always use LockingPtr automatic objects. Let's illustrate this with an example [Example goes on to define the object volatile]
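
To spell out the distinction with a snippet of my own (not the article's code):

  int main()
  {
      volatile int v = 0;
      int &r = const_cast<int &>(v);
      // r = 1;                      // UB: non-volatile access to an object defined volatile
      (void)r;

      int x = 0;
      volatile int &vr = x;          // x is merely *viewed* through a volatile reference
      const_cast<int &>(vr) = 1;     // fine: the underlying object was never volatile
      return x;
  }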

> No, you're just not used to it. It's perfectly fine once you use it a bit. And regardless, there's quite a huge chasm between "it's completely wrong and undefined behavior" and "I don't like it, it's weird".

There might be a glimmer of something interesting in overloading the use of volatile on user-defined types as a second type of access control analogous to "const" but that you use for some other purpose, e.g., gating access to functions based on their thread-safety, or anything else really.

This article doesn't make a convincing case for it because the first example is UB, the second example is UB, it propagates the broken notion that volatile is useful for concurrent access to primitive types, and it doesn't include any discussion of modern techniques like std::atomic<>, etc. Of course, that's no fault of the author, who wrote it in 2001, when the well-defined way of doing things was 10 years away.

It's mostly a problem when people try to promote this, today, as an insightful view on volatile and multithreaded code. As a whole, it isn't, and it propagates various falsehoods that people have been trying to get rid of forever. Whatever glimmer of an interesting point is in there regarding volatile-qualified objects as a second level of access control orthogonal to const is washed out by the other problems.

> That's a separate discussion. Right now we need to get the UB-ness claims out of the way.

It's UB. Just admit that it's UB because the flag_ example does concurrent access to an object from different threads, at least one of which is a write, and the LockingPtr and follow-on examples are UB because they involve casting away volatile from a volatile-defined object.

If you can agree with that, then maybe you can present a related technique, different to the one in the article, which uses volatile in a useful way.


"Just admit" what? That applying volatile to an object and casting that away like with the flag_ example is UB? Yeah, I that's UB. It also wasn't the point of the article, and the use of volatile required for the technique the article is what actually matters, which isn't UB.

Can we step back for a second?

Go back to my top comment. Why did I even post this article in the first place? The point was that "volatile-correctness" is (basically) awesome, and it's hard to get something like it in other languages. This article is where the idea originated from, so I linked to it. i.e.: "There's something called volatile-correctness, which you can learn about by reading this article." The point was not "read this article and blindly sprinkle volatile across your codebase in exactly the same manner and you'll magically get thread safety".

What were you supposed to take away from the article? The idea of volatile-correctness, the idea that you can use a locking pointer to regulate multithreaded access to a class's methods. The idea that volatile acts as a helpful type annotation in this regard, independently of its well-known effects on primitive objects. You can apply it easily without ever marking objects as volatile, like I just showed you in that example. Yet somehow instead of actually extracting the fundamental concepts and ideas from the article, you and everyone else here are trashing it by insisting that the only possible way anyone can read that article is a naive verbatim copy-paste of its text from 2001 to 2019...? Why?

> If you can agree with that, then maybe you can present a related technique, different to the one in the article, which uses volatile in a useful way.

But omitting a couple of volatiles doesn't make it a different technique! You just skip the incorrect uses of volatile. The technique is the same.


"Just admit" that the stuff in the article is UB, because you were going around badgering people to point out the UB, and because your last post demands: "Right now we need to get the UB-ness claims out of the way. Once we agree it's correct in the first place..."

So yes, let's get the UB claims out of the way - by agreeing that it's UB. Not just the flag_ example, but also the LockingPtr example that is the "point" of the article.

> you and everyone else here are trashing it

To be clear, I'm not really "trashing" the article. It's a relic of its time. I am trashing the idea that it's somehow a good introduction to any clever MT technique today.

> by insisting that the only possible way anyone can read that article is a naive verbatim copy-paste of its text from 2001 to 2019...? Why?

I explained it earlier: because the article has too many flaws to be a clean illustration of the technique. It starts with UB, ends with UB, makes wrong assertions about the purpose of volatile, etc.

Again, I agree there might be a glimmer of something here - but this article isn't the way to show it. The reaction you got was expected and fine. I can imagine a different article, written today, without the claims about the purpose of volatile, without the flag example, without the UB of casting away volatile from volatile objects, acknowledging the existence of std::atomic and how this technique complements or replaces it. That could be useful.

I looked at your example, and yes, I see the potential if you want to have an object with a thread-safe and non-thread-safe interface split like that (or really any split: you can overload volatile like that for any type of access control where you can cleanly divide the functions). It has the unfortunate downsides that volatile is not meant for that, and that it implicitly makes all your members volatile, which may pessimize code generation. I guess it doesn't matter that much if all the volatile functions follow the pattern of immediately shelling out to a non-volatile function, though.


Maybe someone should write a more modern version of the article, I don't know.

I would also not expect it to pessimize code generation, since the final dereference should always be of a non-volatile pointer, though I suppose an optimizer bug might make it behave otherwise.

You can combine it with atomic, they're not substitutes. It could let you implement two versions of an algorithm: a lock-free multithreaded one, along with a single-threaded one that uses relaxed accesses (or even fully non-atomic accesses, had C++ allowed that). And then you'd auto-dispatch on the volatile modifier. The possibilities are really endless; I'm sure the limiting factor here is our imagination.
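
A rough sketch of what I mean, with made-up names (the volatile-qualified overload is the concurrent path, the plain one is the single-threaded path, and the qualifier on the pointer you hold does the dispatch):

  #include <atomic>

  class Counter
  {
      std::atomic<int> n{0};
  public:
      // concurrent path: full atomic read-modify-write
      void increment() volatile { n.fetch_add(1); }

      // single-threaded path: relaxed accesses are enough
      void increment()
      {
          n.store(n.load(std::memory_order_relaxed) + 1,
                  std::memory_order_relaxed);
      }

      int value() const { return n.load(std::memory_order_relaxed); }
  };

  void shared_use(volatile Counter *c) { c->increment(); }   // picks the volatile overload
  void exclusive_use(Counter *c)       { c->increment(); }   // picks the plain overload

Whether the relaxed path ever buys anything in practice is a separate question, but the dispatch itself is just ordinary overload resolution.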

I've thought about the other types of split for a long time too, and I haven't managed to come up with other compelling use cases, even though I also feel they should exist. It would be interesting if someone could come up with one, because the ability to have commutative tags on a type seems really powerful.



