I'm currently working on my PhD with an emphasis on formal verification (more specifically proofs of correctness) in the weak memory setting.
Some thoughts as I read through:
> Any number of compiler optimizations. For example, our model currently does not account for compiler optimizations that hoist identical stores from both branches of an if statement to precede that statement.
Until very recently (POPL 2017), this was a serious problem for formal weak memory models. There is only one model for C++11 that I'm aware of that avoids the "out of thin air" read problem, which generally arises as a consequence of supporting these kinds of optimizations. See [1] for more.
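To make the problem concrete, here is the classic out-of-thin-air litmus test, written with C++11 relaxed atomics (my own illustration, not taken from the article):

```cpp
#include <atomic>
#include <thread>

// Classic "out of thin air" litmus test. The question is whether the
// outcome r1 == r2 == 42 is allowed, even though no thread ever
// computes the value 42 -- it would appear "out of thin air".
std::atomic<int> x{0}, y{0};
int r1, r2;

void thread1() {
    r1 = x.load(std::memory_order_relaxed);
    y.store(r1, std::memory_order_relaxed);
}

void thread2() {
    r2 = y.load(std::memory_order_relaxed);
    x.store(r2, std::memory_order_relaxed);
}

int main() {
    std::thread t1(thread1), t2(thread2);
    t1.join();
    t2.join();
    return 0;
}
```

No real hardware or compiler produces r1 == r2 == 42 here, but the C++11 axiomatic model famously struggles to forbid it without also forbidding legitimate transformations such as the store hoisting the article mentions; to my knowledge the POPL 2017 "promising" semantics in [1] is the first to manage both.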
> The memory model must be compatible with the hardware that the Linux kernel runs on. Although the memory model can be (and is) looser than any given instance of hardware, it absolutely must not be more strict. In other words, the memory model must in some sense provide the least common denominator of the guarantees of all memory models of all CPU families that run the Linux kernel.
This sort of "lower bound" on bad memory behavior is normally why people target C11 or C++11, but if the authors happen to see this comment I would recommend checking out the work of Karl Crary at CMU [2]. In his model he omits the traditional total order on writes, present on all modern architectures, with the intent of future-proofing his semantics.
As a further plug for his semantics: it is not "axiomatic" in the sense of the traditional memory models. Importantly, it introduces the notion of programmer-specified orders, which (by my reckoning) fits programmer intuition when writing these algorithms more closely than, say, the C++11 approach of attaching memory orderings to particular cross-thread operations.
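For contrast, this is what the C++11 style looks like: a minimal release/acquire message-passing sketch (again my own example), where the ordering intent is expressed by annotating the two flag operations rather than by stating the intended order between the data write and the flag write directly:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// C++11 style: the ordering lives on the individual cross-thread
// operations (the release store and the acquire load of `ready`),
// and the data/flag ordering follows from the synchronizes-with edge.
int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // plain write
    ready.store(true, std::memory_order_release);  // ordering attached to this store
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) // ordering attached to this load
        ;                                          // spin until the flag is set
    assert(data == 42);                            // guaranteed by the release/acquire pair
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```

In Crary's calculus, as I read it, the programmer would instead state that ordering constraint between the two writes explicitly, which is what I mean by it matching intuition more closely.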
In any event, an enjoyable read that I will have to come back to later!
[1] https://people.mpi-sws.org/~orilahav/papers/main.pdf
[2] https://www.cs.cmu.edu/~crary/papers/2015/rmc.pdf