> Essentially I don’t think it’s really enough to consider the asymptotic complexity of algorithms to compare practical runtimes. You need to know what the smaller-order / constant terms are.
Nitpick: even if you know the constant terms, that only tells you the number of operations the code executes. That's not enough to compare runtimes, because runtimes are also shaped by factors like branch prediction rate and cache hit rate, so you'd need to account for those as well.
At some point, articles become completely unreadable when they try to address every possible nitpick. Some tangents are better left unexplored in a blog post.
This is why benchmarking can be useful. I'm not sure why, in a field where the materials cost of testing an idea is zero, it's done so much less frequently than theorizing.
Real costs are certainly more than zero, but they're almost entirely time; "materials cost" is what I rounded off to zero. Like, if you want to benchmark an idea in aviation, you're going to have to burn some fuel.
Maybe because that commits you to a specific implementation with all of its particular choices instead of being able to compare the class of problems and solutions.