The basic issue is that consumer kernels are general-purpose and have to work on laptops and mobile devices, so a lot of options are enabled by default that make no sense on workstations. You end up throttling when it isn't necessary, i.e. when you're not on battery and have plenty of fan power, or a water loop, to prevent actual damage.
Heck, in many cases you may even need to disable throttling in the BIOS, which is doubly disappointing because a desktop BIOS has no need for settings meant for laptops and mobile devices.
For you, probably, but companies in Asia (for shareholder and regulatory reasons) need to manage power aggressively and will only disable power management when it's really required. That option exists not for you but for them.
Because of the large, dense population centers, and therefore the power requirements, governments require aggressive power-management settings on computers??
> If ATLAS's configure detects that CPU throttling is enabled, it will kill itself.
a little dramatic but ok
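For context, this is not ATLAS's actual test, just a minimal sketch of the kind of check a configure script can do on Linux (it assumes the cpufreq sysfs interface is present):

    # Refuse to continue if dynamic frequency scaling is active,
    # since any timing-based tuning would be unreliable.
    governor=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor)
    if [ "$governor" != "performance" ]; then
        echo "CPU throttling appears to be enabled (governor: $governor)" >&2
        exit 1
    fi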
On modern systems the peak speed is usually higher than the sustained speed, so if you're running software with short CPU bursts, you can get better results with throttling enabled than without it. Also, if your software doesn't use all the cores, the cores it does use may be able to run faster than the sustained all-core speed. (A quick way to check the base vs. turbo limits is shown below.)
Power savings are still useful even when the power comes from the wall, and thermal savings often lead to less noise and can have secondary benefits in conditioned environments. (Of course, there are environments where computers make a good replacement for electrical resistance heaters)
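For the curious, here is a quick way to see that base vs. turbo gap on Linux; this assumes the cpufreq sysfs interface, and the base_frequency file is only exposed by the intel_pstate driver:

    # Base vs. maximum (turbo) frequency, in kHz:
    cat /sys/devices/system/cpu/cpu0/cpufreq/base_frequency
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

    # Or let lscpu summarize the min/max limits:
    lscpu | grep -i mhz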
However, this type of static linking is very strange in today's architectures, especially when you consider that an application can run anywhere, from a light bulb to a top-notch server hosted in the cloud or elsewhere. It especially bugs me that the people leading some of these projects are stubborn to the point of not accepting the new reality imposed by today's software and system architectures.
This fact allows hardware RNGs based on thermal emissions in consumer-grade CPUs to output at least 3 megabits of 'entropy' per second, the equivalent of 3,000,000 coin flips.
I may be misunderstanding what is going on, though.
$ type -a time
time is a shell keyword
time is /usr/bin/time
time is /bin/time
$ time -V
-V: command not found
$ command time -V
GNU time 1.7
My personal laptop's CPU is an Intel i7-9750H. I always run it with turbo boost disabled for predictable, sustained performance. Turbo boost is odd in that it can lead to a sub-par user experience (lag) when the processor subsequently throttles.
Interestingly enough, I can simultaneously compile a ~250k LOC PureScript project and attend a Google Meet call with TB disabled, but not with it enabled. (This is on an MBP-16.)
Intel MacBooks are also notoriously thermally limited, so I wonder if the settings you're using are just causing the CPU to run hotter than it would at stock.
Does your laptop have such atrocious cooling that it can't sustain frequencies higher than base? When my laptop undergoes thermal throttling (i.e. hits 95°C), the frequency is still above the base frequency, so I'm still getting more performance than if turbo were disabled.
Edit: tested with Cinebench. On my laptop with an Intel CPU, the sustained all-core turbo frequency is 45% higher than the base frequency. This is with ThrottleStop enabled; otherwise TDP/tau throttling kicks in before thermal throttling does.
At base clock I see no visible loss in performance, and the workstation is much quieter.
The performance loss in compilation is offset by compiling continuously (watch mode), which I can do with TB disabled but not with TB enabled.
For gaming, I limit the turbo boost to 3.2 GHz and get more consistent performance with no sudden drops.
On Windows, instead of disabling TB, I just cap the CPU TDP at 40 W via ThrottleStop.
TB just makes the laptop loud and hot even when the added performance is not required.
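On Linux, the rough equivalent of these settings can be applied through sysfs and cpupower; this is just a sketch assuming the intel_pstate driver, not the ThrottleStop configuration described above:

    # Disable turbo boost entirely:
    echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo

    # Or keep turbo but cap the maximum frequency (e.g. at 3.2 GHz):
    sudo cpupower frequency-set -u 3.2GHz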
I was using the laptop to mine verium during this past winter in the US south to keep my hands and arms warm in lieu of running HVAC more. I never bothered to switch it back to normal or "quiet" mode.
I take my first statement back: running Blender benchmarks, or anything else that tries to max out the GPU, makes a bit of noise, but with normal gaming and stuff like HandBrake I can't hear anything.
My take on this is that we could do with a better measure of CPU usage, which the author of this article found to be cycles (a quick perf example follows below).
Not all instructions have the same thermal cost, and even though my laptop's CPU can hold 3.99 GHz while running stress tests whereas its base clock is 2.59 GHz, I doubt it could do the same running AVX2 instructions (or else it has quite impressive thermals for a laptop).
Also, does disabling throttling disable dynamic frequency scaling of the CPU's cache too?
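On Linux, counting cycles instead of wall-clock time is straightforward with perf; ./my_benchmark below is just a placeholder:

    # Report cycles and retired instructions rather than elapsed time:
    perf stat -e cycles,instructions ./my_benchmark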
> “How much work did the CPU have to do in order to complete this task?”
> But what time(1) tells me is:
> “How long did the CPU work to complete this task?”
Yes, exactly. It's essentially impossible to time things accurately, especially on a laptop. To answer the second question, the best bet when micro-benchmarking is to use the rdtscp instruction, but even that has pitfalls.
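Short of hand-rolling rdtscp, one way to at least reduce run-to-run variance on Linux is to pin the benchmark to a core, force the performance governor, and let perf repeat the run; ./bench is a placeholder:

    # Force a fixed (maximum-frequency) scaling policy:
    sudo cpupower frequency-set -g performance

    # Pin the benchmark to core 2 and repeat it 10 times,
    # so perf can report the mean and spread of the counts:
    taskset -c 2 perf stat -r 10 ./bench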