timer_freq.QuadPart /= 1000; // To convert frequency from 'ticks per second' to 'ticks per millisecond'
It also reduces accuracy — a second might no longer be 1000 ms, but maybe 1001 ms.
I think a better fix would be either to simply use doubles instead of floats, or at least to offset the value to the start of the game, as long as a single game can't last months.
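For concreteness, here's a minimal sketch of that accuracy loss, assuming the hypothetical ~2.93 MHz frequency that a 3 GHz RDTSC-based machine would report (the names and the constant are illustrative, not from the game's code):

```c
#include <stdint.h>

/* Hypothetical QueryPerformanceFrequency result on a 3.0 GHz machine
 * (RDTSC >> 10); the exact value depends on the hardware. */
const int64_t FREQ = 2929687;

/* Milliseconds the divided-frequency code reports for one true second:
 * FREQ / 1000 truncates 2929.687 down to 2929 ticks per "millisecond",
 * so a real second of ticks is counted as slightly more than 1000 ms. */
double reported_ms_per_second(void)
{
    return (double)FREQ / (double)(FREQ / 1000);
}
```

With these numbers every reported "millisecond" is slightly too long, and the error accumulates into a noticeable drift over the course of a minute; how much depends on how far FREQ / 1000 lands from the true ratio.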
QueryPerformanceFrequency documentation: https://msdn.microsoft.com/en-us/library/windows/desktop/ms6...
The solution is clearly to multiply by 1000 before performing the integer division:
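As a sketch (the names are illustrative, not the game's actual code):

```c
#include <stdint.h>

/* Elapsed milliseconds between two QueryPerformanceCounter readings.
 * Multiplying by 1000 before the integer division keeps full tick
 * resolution; dividing the frequency up front discards the remainder.
 * With a frequency in the low-MHz range, (now - start) * 1000 stays
 * well inside int64_t for any realistic session length. */
int64_t elapsed_ms(int64_t now, int64_t start, int64_t freq)
{
    return (now - start) * 1000 / freq;
}
```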
While I'll defend the idea of modifying QueryPerformanceFrequency result to achieve desired resolution (in this case, ticks per ms), you are absolutely right about the possibility of the value becoming several orders of magnitude smaller in the future. In that case, multiplying the subtraction result by 1000 instead of dividing the frequency is indeed far safer.
> While I'll defend the idea of modifying QueryPerformanceFrequency result to achieve desired resolution (in this case, ticks per ms)
Did a quick back-of-the-envelope calculation, and the suggested solution is off by 42 milliseconds per minute on a hypothetical 3 GHz system. It might not sound like much (~2.5 frames at 60 fps), but I've seen smaller errors cause massive issues when it comes to time.
It's better to do it right than to later debug these often surprising issues we didn't have sufficient imagination to foresee.
Unless one is looking for an amazing bug hunting war story, of course. :)
And as for returning the result: the original code does return a float, so while I could also return a double, it probably wouldn't make any difference, as I expect the game to store those results in floats too. Moreover, since the result is basically "session time", it is only expected to span up to several hours, in which case floats are still fine.
Things like using a bit weird data types (floats instead of doubles) and writing functions thousands (!) of lines long.
Then asking me why the reconstruction had a weird circular error around the origin...
Will generate more code warnings of course, but I believe it's a good tradeoff, since the game has a ton of them anyway and going through them all is something for another day.
On single-CPU-socket systems, Windows seems to usually compute QueryPerformanceCounter by simply shifting the CPU's RDTSC count right by 10 bits, effectively dividing the value by 1024. So an exactly 3.0 GHz system would have 2929687 as the QueryPerformanceFrequency return value.
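A trivial check of that arithmetic (the >> 10 mapping is observed behavior as described above, not a documented contract):

```c
#include <stdint.h>

/* QPC frequency as derived from the TSC rate on such systems:
 * a right shift by 10 bits is a truncating division by 1024. */
uint64_t qpc_freq_from_tsc(uint64_t tsc_hz)
{
    return tsc_hz >> 10;
}
```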
On NUMA machines, QueryPerformanceCounter uses HPET or APIC or whatever is available instead of RDTSC, because RDTSC is not HW-synchronized between CPU sockets. I bet those HPET divisors will be in a different divisor range.
No matter what clock source Windows might use, simply using doubles would have fixed the issue. There's no point in using floats in the first place.
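The float limitation is easy to demonstrate: a 32-bit float has a 24-bit significand, so above 2^24 it can no longer represent every integer, while a double (53 bits of significand) has headroom to spare. Here is a small round-trip check (helper names are made up):

```c
#include <stdint.h>

/* Does a round-trip through the given floating type preserve an
 * integer tick count exactly? */
int float_preserves(uint64_t ticks)
{
    return (uint64_t)(float)ticks == ticks;
}

int double_preserves(uint64_t ticks)
{
    return (uint64_t)(double)ticks == ticks;
}
```

float_preserves already fails at 2^24 + 1, which a MHz-range tick counter passes within seconds of uptime (or a millisecond counter within a few hours); double_preserves holds all the way to 2^53.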
edit: I see you did link to the API!
That said, I failed to observe this issue with TimeRollOver enabled for the process. The only way I could reliably reproduce it on my machine (with low uptime) was to fake QPC results, like so.
I suspect VS2003 didn't enable SSE by default and was compiling the code using the x87 FPU, in which case all intermediate operations were done using 80-bit floats with only a final cast at the end.
On the other hand, using /fp:fast with VS2017 did give me similar results to what VS2003/VS2010 produces - so if I were to guess, VS2003 generates x87 math in a manner similar to fast math, while precise math breaks it?
Perhaps comparing the generated assembly would have given a definite answer, but since it is so obvious from the source, I don't think it is worth the effort.
scaled = raw * mult >> shift;
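That idiom can be made concrete for the ticks-to-milliseconds case. Below is a sketch of the usual precomputed mult/shift scheme; it relies on the GCC/Clang unsigned __int128 extension for the wide intermediate product, and the helper names are made up:

```c
#include <stdint.h>

#define SHIFT 32

/* mult = round(1000 * 2^SHIFT / freq), precomputed once at startup. */
uint64_t make_mult(uint64_t freq)
{
    return (uint64_t)((((unsigned __int128)1000 << SHIFT) + freq / 2) / freq);
}

/* scaled = raw * mult >> shift, with a 128-bit intermediate so the
 * product cannot overflow for any realistic tick count. */
uint64_t ticks_to_ms(uint64_t raw, uint64_t mult)
{
    return (uint64_t)(((unsigned __int128)raw * mult) >> SHIFT);
}
```

For the hypothetical freq = 2929687 this gives mult = 1466016, and one minute's worth of ticks (175781220) converts to exactly 60000 ms, with no floats involved.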
Would it be possible to avoid floats completely for the game's calculation?
I made a little uptime faker for Linux, it's not as clever as this one though as it's not altering the kernel timers directly https://www.anfractuosity.com/projects/uptime
There are many libraries for that, e.g. https://github.com/XMunkki/FixPointCS
Does the game have any kind of modding community behind it whatsoever?
I'll just have to remember to reboot before I play my favorite version of Taokaka, when I get the urge.
If you're lucky, your compiler can generate 64/128-bit math for your target if there's no native registers/ALU that wide. If not, it's not exactly rocket surgery to do this yourself. Or in dire cases, use an arbitrary precision library.
It's exactly 0.1 in base 3 and 0.4 in base 12, for example.