> When the hell would you ever need to know that?
I find this such a strange perspective. You're writing code for a computer. Sometimes you need to estimate how quickly it should run, or how quickly it could run with optimal code. Surely it's obvious that knowing how fast your computer does stuff, at least to within three orders of magnitude, is a necessary ingredient for making that estimate.
And yet in your mind that fundamental fact is "useless trivia." Very odd.
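To put numbers on it, here's the kind of back-of-envelope arithmetic I mean. Every figure below is an assumption picked for illustration, not a measurement:

```python
# Rough estimate: how long should one pass over 100 million records take?
# All numbers here are illustrative assumptions.

records = 100_000_000         # assumed workload size
instructions_per_record = 50  # assumed cost of the per-record work
clock_hz = 3e9                # assume a ~3 GHz core
ipc = 2                       # assume ~2 instructions retired per cycle

seconds = records * instructions_per_record / (clock_hz * ipc)
print(f"~{seconds:.2f} s")    # ~0.83 s -- so if it takes 5 minutes,
                              # something other than raw compute dominates
```

If you don't know roughly what `clock_hz` is for the machine in front of you, you can't even start that calculation.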
Anyway, my answer to the question would be "a lot". You need to know the specifics of the workload and CPU to be more precise.
The reason is that CPUs today are "fast enough." These days most slowdowns aren't the result of the CPU failing to execute instructions quickly enough; other factors dominate, such as memory access patterns, network delays, and interfacing with other complex software such as databases.
Unless you are writing compute-heavy code, the speed of the CPU isn't much of a factor in estimating how fast a program will run.
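One way to see why those other factors dominate is to compare how many instructions a core could have retired during each kind of stall. The latencies below are rough, commonly cited ballpark figures, not measurements of any particular machine:

```python
# Cycles a ~3 GHz core "wastes" while waiting on slower things.
# Latencies are rough ballpark figures for illustration.

clock_hz = 3e9
stalls = {
    "L1 cache hit":        1e-9,  # ~1 ns
    "main memory access":  1e-7,  # ~100 ns
    "SSD read":            1e-4,  # ~100 us
    "same-DC network RTT": 5e-4,  # ~500 us, assumed
    "database round trip": 1e-3,  # ~1 ms, assumed
}

for name, seconds in stalls.items():
    cycles = seconds * clock_hz
    print(f"{name:22s} ~{cycles:,.0f} cycles of potential work lost")
```

A single database round trip costs the equivalent of millions of instructions, which is why shaving instructions rarely helps when the program is waiting on I/O.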
I'm having trouble believing that people who think a CPU can do thousands of instructions per second will do well at this (or at reasoning about the memory hierarchy).
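If anyone doubts the orders of magnitude involved, even an interpreted language makes the point. This quick sketch counts loop iterations completed in one second; expect millions in CPython, and a compiled language would be orders of magnitude higher still:

```python
import time

# Count simple operations completed in one second.
# Even slow, interpreted CPython manages millions of iterations;
# "thousands per second" is off by several orders of magnitude.
n = 0
deadline = time.perf_counter() + 1.0
while time.perf_counter() < deadline:
    n += 1
print(f"~{n:,} loop iterations per second")
```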
You may as well ask any other sort of technical trivia question and figure that the people who happen to carry around more random facts about tech are more likely to understand the bigger things that do matter. It isn't necessarily wrong, but it's a pretty obtuse way to make a judgment. Why not just ask them about the memory hierarchy or network delays or whatever directly?
I personally have no idea, and it's the one computer I use most frequently.
That said, when I started using computers, my first machine ran at 897 kHz on an 8-bit CPU and had 16K of RAM. I was 11 years old, and this was a pretty standard machine for the time, unless you had real money (and even then, at the top end - not counting minicomputers or mainframes - you'd be lucky to break 8 MHz and a meg of RAM).
But I know I am of a different era. And honestly, I've stopped caring about CPU speeds too, because lately they don't change much (top end is about 4-5 GHz per core; servers can have around 32 cores - though that is increasing too).
What you should care about is memory usage per process, and how the software you are running (or which multiple people are using) parcels that out. For instance, with PHP (I'm not sure about PHP 7) you get one worker process per concurrent request, and how much memory each of those processes takes ultimately puts an upper limit on the number of users that can be served at one time. In that case, knowing your memory usage and the constraints of the server can be very important (there are a ton of other factors to consider as well, I know).
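As a sketch of that kind of capacity estimate (the memory figures here are assumptions for illustration, not claims about typical PHP deployments):

```python
# Upper bound on concurrent requests when each one gets its own process.
# All figures below are assumptions for illustration.

server_ram_mb = 16_384   # assume a 16 GB server
reserved_mb = 2_048      # assume ~2 GB for the OS, caches, etc.
per_process_mb = 64      # assumed footprint of one PHP worker process

max_workers = (server_ram_mb - reserved_mb) // per_process_mb
print(f"At most ~{max_workers} requests served at once")  # ~224
```

Past that ceiling, requests queue or the machine starts swapping, and no amount of CPU speed saves you.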