Yes. Running tests to validate your assumptions is a big part of robust software engineering. In this case the results were unsurprising but not uninteresting.
Testing your assumptions is something that you're supposed to do when you hit a wall, not when you're driving through a field.
But do you have enough information about how Redis accesses memory under the benchmark in question, combined with the OS page replacement strategy, combined with the characteristics of SSDs, to know the results beforehand? You can guess, for sure; but do you know?
If we all follow your approach, we'll never be surprised unless we get stuck; and if our knowledge is as reliable as you seem to think it is, we shouldn't get stuck in the first place, because we'd have anticipated the obstacle and steered around it.
The article has relatively low value in terms of information content, but the mindset is to be commended. The exercise should have given the author better intuitions about the three factors mentioned above. Modern, non-budget systems very seldom thrash; there's a younger generation coming along who've never experienced a system frozen that way.
An analogy I can think of is testing whether a stock Ford Fiesta can reach the speed of sound. You know what the engine is capable of, the environment it is operating in, and the tires it is running on; you simply don't need to floor the accelerator to come to a conclusion.
That saying about picking one's battles comes to mind. The mindset is certainly of a sharp character, but what good is a knife without a hand to guide it? The map is not the territory, but it does save a lot of time if used strategically.