This paper is very enlightening, but I don’t think an algorithms class should address these sorts of problems. When I went to school (15 years ago), we had a separate machine architecture course series that covered these practical concerns. We had labs where we would time memory accesses, test aliased variables, and manually decompile simple functions.
In my humble opinion as a volunteer educator, algorithms are already very complicated and students being introduced to them don’t need to concern themselves with this stuff at the same time.
> We can also examine what happens when we write data repeatedly.
I wonder whether an optimizing compiler can detect sequential writes of the same value and optimize them, perhaps replacing the loop with something like a blitter. It would be interesting to see the difference in speed if the data to be written were random, rather than the same value (1.0) every time.