But when that list is empty, doing the right kind of CPU sleep is of course totally worthwhile.
Dragonfly actually removed it two years ago: http://lists.dragonflybsd.org/pipermail/commits/2016-August/...
- Pre-zeroing a page only takes 80ns on a modern CPU. vm_fault overhead in general is at least ~1 microsecond.
- Pre-zeroing a page leads to a cold-cache case on-use, forcing the fault source (e.g. a userland program) to actually get the data from main memory in its likely immediate use of the faulted page, reducing performance.
- Zeroing the page at fault-time is actually more optimal because it does not require any reading of dynamic RAM and leaves the cache hot.
- Multiple synth and build tests show that active idle-time zeroing of pages actually reduces performance somewhat and incidental allocations of already-zeroed pages (from page-table tear-downs) do not affect performance in any meaningful way.
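For a rough feel for that first number, here's a minimal sketch (mine, not from the commit) that times zeroing a single 4 KiB page; absolute figures will vary a lot with CPU and cache state:

    /* sketch: time how long memset() takes to zero one 4 KiB page.
       Results depend heavily on whether the page is already cache-hot. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define PAGE_SIZE 4096
    #define ITERS     100000

    int main(void) {
        static unsigned char page[PAGE_SIZE];
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < ITERS; i++) {
            memset(page, 0, PAGE_SIZE);
            __asm__ volatile("" ::: "memory");  /* keep the compiler from eliding the memset */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long long ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                     + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per 4 KiB page\n", (double)ns / ITERS);
        return 0;
    }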
Or physically pull the RAM out, keep it cold with LN2, and stick it into a device designed for reading it all out.
The starting point is that there is stale, useless data in RAM. Then a user-mode program requests an empty page, and usually when it does this it wants to use the page immediately. (1) With non-polluting writes, you have to use main memory bandwidth both for clearing the page and for bringing the page back into cache immediately afterwards when the program uses it.
With writes that just allocate new, zeroed dirty lines in the cache (like AMD's CLZERO), you avoid both the write to DRAM (which will happen later, when the lines are evicted from cache, probably after the program has used them) and the read, because the lines are now all in the cache.
(1) And on Linux this is trivially true, because Linux only allocates and clears the page when it is first accessed.
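To make the "non-polluting writes" distinction concrete, here's a rough sketch of the two store flavours on x86. I'm using SSE2 streaming stores for the non-temporal case rather than CLZERO itself (CLZERO is AMD-only and needs compiler support); the page pointer is assumed to be page-aligned:

    /* sketch: zero a page with cache-bypassing (non-temporal) stores vs.
       ordinary stores. The streaming variant avoids polluting the cache, but
       the next reader must fetch the lines back from DRAM; the ordinary
       variant leaves the freshly zeroed lines hot in cache. */
    #include <emmintrin.h>   /* SSE2: _mm_stream_si128, _mm_setzero_si128 */
    #include <string.h>

    #define PAGE_SIZE 4096

    /* non-temporal: fine for "zero it and forget it" (e.g. background scrubbing) */
    static void zero_page_streaming(void *page) {      /* page must be 16-byte aligned */
        __m128i zero = _mm_setzero_si128();
        for (size_t off = 0; off < PAGE_SIZE; off += 16)
            _mm_stream_si128((__m128i *)((char *)page + off), zero);
        _mm_sfence();            /* order the streaming stores before later use */
    }

    /* ordinary stores: better when the caller will touch the page right away,
       because the zeroed lines end up sitting in cache */
    static void zero_page_cached(void *page) {
        memset(page, 0, PAGE_SIZE);
    }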
Your system: memory is released, the kernel clears it in the background, wasting write bandwidth (which might not matter for anything except power if the system was idle at the time), and when the user-mode program starts using it, every new cache line it writes to will trigger a spurious read.
Modern Linux: memory is released, the kernel lets it lie, not spending any power or bandwidth on it, until a user-mode program allocates it and touches the page. Then the kernel picks up the page and writes the entire page with zeroes, using whatever idiom on that CPU allows it to just allocate the lines in cache without reading them from RAM. This is really fast, faster than a single memory fetch. The user-mode program can then use it directly without having to fetch anything from DRAM.
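A quick way to see that "clear it only when first touched" behaviour from user space (a sketch; reading RSS out of /proc/self/statm is Linux-specific):

    /* sketch: anonymous mmap() hands back address space immediately, but the
       kernel only materialises (and zeroes) each page when it is first touched. */
    #include <stdio.h>
    #include <sys/mman.h>

    static long rss_pages(void) {
        long size = 0, rss = 0;
        FILE *f = fopen("/proc/self/statm", "r");
        if (f) { fscanf(f, "%ld %ld", &size, &rss); fclose(f); }
        return rss;
    }

    int main(void) {
        size_t len = 64 * 1024 * 1024;                   /* "allocate" 64 MiB */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;

        printf("rss after mmap:  %ld pages\n", rss_pages());
        for (size_t i = 0; i < len; i += 4096)
            p[i] = 1;              /* first touch: fault, zero-fill, then our write */
        printf("rss after touch: %ld pages\n", rss_pages());
        return 0;
    }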
Nit: it's faster than a page fault (so fault + zeroing is pretty much the same as just fault).
According to the recent-ish Latency Numbers, a main memory reference is ~100ns (varying by arch and local vs. remote DRAM), which is about the same as zeroing a page, at least with respect to the DragonFly numbers I posted above.
Any idea why that's necessary? So that it's preempted by an IDLE-priority process?
Also, do not conflate the user-thread idle priority class with the (non-)priorities of Idle Threads.
The idle task (when there's nothing to run) doesn't have a priority at all; it just gets run when nothing else is available. The "idle" priority is an actual priority, for when there's no higher-priority task to run. The ZPT is prioritized below "idle", so super-duper-low priority.
If you look at this image: http://www.retrocomputingtasmania.com/home/projects/burrough... it's a 2-CPU system, and in the lower left is the "B Meatball" idle pattern.
A cursory search led me to this project that blinks the power LED according to disk activity, which is not far from your idea (replacing disk activity with a composite of system load, for instance?)
Eons ago I had a small program on Linux that blinked the otherwise-almost-useless scroll/num lock LEDs based on network I/O activity. It was fairly cool.
I just looked at several servers I have access to at several operators, with different physical hardware and distros; none of them has an LED defined for anything other than scroll/num lock.
As I said elsewhere, none of the (recent) servers I have access to, most of them Intel-based, across several distros, etc., have anything other than LEDs for numlock/scrolllock defined under /sys/class/leds/, where I'd expect to find the CPU activity LED. What am I missing here?
So if numlock and scroll lock are the LEDs available on your hardware, you can repurpose one of those as the CPU activity LED.
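For what it's worth, that's just a matter of pointing the LED at a different trigger. A sketch (the LED name below is hypothetical; look at what's actually in /sys/class/leds/ on your box, and the "heartbeat" trigger only exists if the kernel was built with CONFIG_LEDS_TRIGGER_HEARTBEAT):

    /* sketch: repurpose an existing keyboard LED by changing its trigger via
       sysfs (needs root). "heartbeat" blinks at a rate that tracks load average;
       with CONFIG_LEDS_TRIGGER_CPU you'd get per-CPU "cpu0", "cpu1", ... triggers. */
    #include <stdio.h>

    int main(void) {
        /* hypothetical LED name; run `ls /sys/class/leds/` to find yours */
        FILE *f = fopen("/sys/class/leds/input3::numlock/trigger", "w");
        if (!f) { perror("open trigger"); return 1; }
        fputs("heartbeat", f);
        fclose(f);
        return 0;
    }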
I prefer the approach of improving energy efficiency (short- to mid-term time scale) and investing in greener energy sources (mid to long term) to buy us time until nuclear fusion becomes viable and electricity consumption is no longer a harmful process.
We shouldn't punish people less fortunate than ourselves for our own gluttony - which is all that raising prices would do.
Raising efficiency probably won't help at all due to the Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox
In economics, the Jevons paradox (/ˈdʒɛvənz/; sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the rate of consumption of that resource rises due to increasing demand. The Jevons paradox is perhaps the most widely known paradox in environmental economics. However, governments and environmentalists generally assume that efficiency gains will lower resource consumption, ignoring the possibility of the paradox arising.
In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal-use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.
As an anecdotal example, we replaced one 60w incandescent bulb in our bathroom with 4 x 5w LED spotlight bulbs. This is both an increase in light, and a reduction in energy usage, even though it doesn't reflect the full efficiency gains of the LEDs.
The gaming industry demonstrates this the most clearly, but it's definitely present in general-purpose computing as well. E.g. when Windows XP was first released (pre-service packs) it required twice the hardware specifications of Windows 2000 yet offered little functional difference (read: actual real-world stuff that could be done on it) aside from theming.
Thankfully that trend with Windows has reversed somewhat, but it's still ever-present in desktop software and its movement towards web-based technologies.
I think the gaming industry actually demonstrates the opposite most clearly, too.
I'm often blown away by how efficient some games are, and how well they take advantage of advances in hardware.
I look at something like World of Warcraft, a 14 year old game that has trouble running on my desktop computer despite its graphics being... limited to say the least. And then I look at Breath of the Wild. A patently stunning game.
And then I remember which one of the two is a mobile game.
compute is EASY to jevon =)
Saying efficiency leads to increased usage is not the same thing as saying increased usage balances efficiency 100%.
It reminds me of people who argued that airbags and ABS lead to people taking more risk when driving. That may be a real effect, but nevertheless, casualties have declined.
The question of datacentres is another matter though. But they only respond to demand from people like ourselves who lease computing time from them / place our own hardware in their racks.
* I'm not counting mining (bitcoin et al), home servers, media centres, etc. where it's more likely running costs will be factored into the buying decision. However these are uncommon compared to other hardware like laptops, desktop PCs, games consoles, mobile phones, TVs, fridges, kettles, etc.
> People definitely buy more and bigger TVs if their cost of operation is cheaper.
In all my years I have never, ever, heard anyone say that until now. People buy bigger TVs because it's cheaper to buy, or because they offer better features (smart, 3D, 4k, curved, etc) or because they move house and their new room is bigger so want something proportional. Or even just because they're used to their old TV and want a visible upgrade.
However I have never heard anyone say "this TV is bigger than my old one because it's cheaper to run".
(obviously I'm not saying "nobody in the history of consumers has said what you claim", but I would be astounded if that was a normal buying trend. More likely it's a niche quirk you've given as an example)
> They also buy more power-hungry phones – energy efficiency increases in general haven’t turned into energy savings!
From what I gather, the trend for power-hungry phones has outstripped energy savings anyway. It's improvements in battery technology that have enabled people to continually upgrade. So I don't really see the cost of electricity vs. energy consumption improvements being a deciding factor here.
> More efficient use of energy also simply makes people have less incentives to conserve it (switching off lights and appliances when not used, turning down heating/AC, driving less, and so on).
That's a fair point. It would be hard to judge just how significant that impact is though. But I definitely agree there will be people out there who do leave lights on because they're cheap to run.
I do grant you that energy efficiency wins in many fixed appliances (washing machines, fridges, etc) do probably transfer directly to reduced energy usage by those appliances – but remember that energy is fungible, and reduced use here may and often will transfer to increased use there.
The improved efficiency enables new uses which consume more energy, eating into the improvement.
* for small values of noone/nobody
Also let's not forget that back in the days of CRTs everyone was still watching standard definition, which looks terrible on 60" displays (which possibly was also a deciding factor for TV studios using a wall of monitors rather than one big screen?). So there was a lot to be said for having the right-sized screen to fit the output resolution. These days we have 4k and that will easily scale to 60 inches.
TFTs, being thinner/cooler/more efficient than CRTs, allow bigger and more common screens thus using some if not all of the energy saved from doing away with CRTs.
This is the same point as someone else made up the thread WRT the availability of RAM enabling developers to be more complacent about the efficiency of their applications.
At least with the RAM example (where more RAM enables developers to write heavier software applications) there is a definite causation. However with regards to CRTs, I think we'd have seen the same trend to larger screens even without the drive to engineer more energy efficient hardware (and in fact we did see that with plasma screens back when they were in vogue. Plasma was favoured for bigger displays because it produced better looking screens* despite LCD being more energy efficient).
* better viewing angles, refresh rates, sharper display, better contrast
That wasn’t my point, and was why I originally said that you and Sharlin seemed to be talking past each other.
LCDs took off because of their physical advantages (weight/thickness/heat, although the heat it produces has to be correlated with energy input) despite their shortcomings (fixed resolution, limited brightness and contrast ratio, response time) and plasma screens were an attempt to deal with those shortcomings but are now mostly dead. As you say, improved efficiency was correlative but not entirely causative.
A naive view would have been that as LCDs took off, their efficiency would lead to a drop in power consumption over CRTs. The Jevons paradox shows that this is not necessarily the case - borne out by the proliferation of displays where previously there were none, and by displays getting larger.
I think we'd need to run the maths before making any claims there tbh. We're getting dangerously into the realm of using assumptions as statistics. Points we'd need to consider:
* how much more efficient are LCDs compared to plasma and CRTs per square inch.
* how much did the trend to bigger screens proliferate with plasma vs LCD
* how has the cost of LCD and plasma screens changed over the last 20 years (this should be broken down by TVs with features such as smart TVs, 3D, HD, 4k, curved screens, etc)
* what about the uptake of said features on TVs?
* and lastly are those features only available on TVs of screen sizes > n?
* any other variables I've not considered? (I've only quickly thrown some thoughts together so there's bound to be some metrics I've missed)
I think the point you're making is a pretty hard conclusion to argue (or for me to refute) without any meaningful statistics to back it up. However it does still make for an interesting discussion so while the conclusion may remain unproven I have enjoyed the debate :)
I would probably argue that integrating more (oxymoronically) "smart" stuff into TVs might have made them less efficient too, but it probably helped overall because of increased integration, fewer <100% efficient power supplies, etc.
I think that's an overstatement regarding 60 inches. I still have a 27" Samsung LCD TV that was high end when I bought it, but I did some light research and it appears that if I limit my choices to new 4K Samsung TVs that are in stock at a NYC retailer, there are plenty of 40-50 inch options. There are also 30 inch Samsung TVs if you don't mind a lower resolution.
I think it's assuming too much to assume that everybody has a $500-$1000 budget and gets the biggest thing they can afford. Some people don't have that budget, and some people who have the money still happily take the savings now that prices are down from a decade ago. And some people don't get a new device until the old one breaks.
I think an educated person should be aware of the Jevons paradox, but it's overused and abused because people cite it dogmatically to short-circuit thinking or fact gathering. Risk homeostasis is another similar idea - there's something to it, but it's harmful to reasoned thinking when people go around assuming it applies 100% without checking.
The 60" LCD panel takes up 3-8" front to back, which is dwarfed by the cable box, disc player, gaming system...
Likewise, people used to have one TV. Now, a significant majority have multiple. Not because they’re more energy efficient but the effect is the same.
Also for what it's worth, back in the early-to-mid 90s I actually did do a study on the number of TVs in an average household in my home town (it was for a college assignment). While my sample size was relatively small (ie only a few hundred people interviewed), I did discover the vast majority of homes had 2 TVs instead of 1 (which surprised me as I didn't live in a particularly affluent area). So I don't think it's quite true to even say most people used to only have 1 TV. Or at least that wasn't the trend observed by my study.
I suspect if you re-ran the study you’d get a number bigger than 2, which (qualitatively) is the point I was getting at!
However I do also think we've headed into the realm of using assumptions as statistics (as also discussed in my other post) so perhaps this is one of those occasions where our differing opinions cannot be consolidated?
Now for a car I can see that immediately. Whoa $60 for a tank of gas? Maybe I won’t go on that road trip or maybe I’ll use the bus or telecommute.
We Americans used to love our V8 engines in our big, boaty cars. Then we had an energy crisis and gas prices rocketed up. For a while, people bought more fuel efficient small cars with 6- or 4-cylinder engines. Then gas prices dropped and people started buying trucks. Then gas prices went up again and small cars became popular again. Then gas prices dropped and people started buying SUVs/crossovers... see a pattern forming?
Businesses will take shortcuts to save money because saving money is pure profit. And consumers will generally put their own financial needs above the concerns of the wider planet - because "what difference does one person make?" (a point I often read / hear). So the only alternative is to set mandatory guidelines that products have to adhere to. Sure, that will make products more expensive in the short term (R&D costs), but those prices will come down in the mid to long term and you end up with hardware that's cheaper to run (than if you just put electricity prices up) plus less energy consumption per device. It's a win-win.
But as I said, this is more of a European opinion than a US one - the US tends to favour a lack of corporate regulation.
However I think ultimately there isn't a "correct" approach, just different opinions on the least disruptive.
On the other hand, with anything that consumes energy (whether electric, petroleum, or other), there is a lot of opportunity to influence behavior as there's both an up-front capital cost and an ongoing operational cost involved. Usually, the purchaser is paying directly for the operational cost. As a result, there's a direct path from the price of the consumable to the user of it. In these cases, incentivizing via the pocketbook can work quite well. And if that turns out to not be enough, more incentives can be added on the front-end via credits/taxes on the new/old thing to help shrink the price delta between them. So there's quite a bit that can be done before you get to the point of regulating. It also has the advantage of being easier to fine-tune than regulation.
Absolutely disagree here - if petrol was vastly more expensive, alternative technologies would have to come down in price and become more popular, so yes, even poorer people could eventually afford them. By keeping the price as low as it is (and living in the UK, it's hugely expensive compared to the US and many EU countries) we're allowing huge pollution of the environment for the sake of affordability. Maybe by that logic we should allow coal-fired boilers again? They are still allowed in many places across the EU for that exact reason - because forcing people to switch to natural gas/electricity/ecopellets would punish the less fortunate people. But the "less fortunate" people are going to be fucked the most if we don't work on the pollution, which is sort of impossible if we're stopping the fight because of them in the first place.
I also don't appreciate your "Maybe by that logic we should allow coal-fired boilers again?" comment when I was very clear in my post - the one you're directly replying to - that we should be focusing on greener forms of electricity generation and energy consumption. If that also drives up electricity prices then so be it. However, artificially increasing electricity prices and expecting the market to do the honourable thing seems like trying to solve the problem by changing the end variable rather than fixing the root cause itself and letting market prices adjust accordingly.
Making energy prices include externalities would encourage the improvements you listed.
* "generally" because home servers / media centres / etc are often picked for their power consumption. As are mining servers (eg crypto-currency). But these are by far the exception rather than the norm in terms of consumer devices.
Why not impose a tax that doubles the cost of energy, and use all of the proceeds to fund universal basic income? That would both help poorer people and reduce energy usage.
Charging heavy consumers at a progressively higher rate will ensure the heaviest emitters pay their share without disproportionally affecting low income or light emitters.
How would we use less of it? Precisely by actions such as spending development time on tasks that reduce energy usage, such as improving the CPU idle states (among many, many other such options).
That said, there are projects like Erlang on Xen  (a bare-metal-ish system) which enable unusual deployment patterns like spinning up a VM only after a request has been received, which could, presumably, make radically more efficient use of virtualised platforms. Not sure if anyone's done this in anger though.
Edit: I suppose higher prices would lead more people to do simple things like configuring their test servers to shutdown overnight and on weekends, mind
 http://erlangonxen.org/ , also https://news.ycombinator.com/item?id=5431392
Most (all?) microcontrollers have a wide range of low-power modes that disable more and more features of the chip to save power.
So if you only need it to do something say once every minute, you can usually save a lot of battery life by using the low-power modes.
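As a hedged sketch of that "wake once a minute" pattern on an AVR-class microcontroller (register and bit names are ATmega-style; other families differ in the details): put the watchdog into interrupt mode, sleep in power-down, and count wakeups until roughly a minute has passed.

    /* sketch for an ATmega-style AVR: spend almost all the time in power-down
       sleep, waking roughly every 8 s on the watchdog interrupt, and only doing
       real work about once a minute. */
    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>
    #include <avr/wdt.h>

    volatile uint8_t wakeups;

    ISR(WDT_vect) {
        wakeups++;
        WDTCSR |= (1 << WDIE);     /* re-arm interrupt mode (some parts clear WDIE) */
    }

    static void watchdog_interrupt_8s(void) {
        cli();
        wdt_reset();
        WDTCSR = (1 << WDCE) | (1 << WDE);                 /* timed change sequence */
        WDTCSR = (1 << WDIE) | (1 << WDP3) | (1 << WDP0);  /* interrupt mode, ~8 s period */
        sei();
    }

    int main(void) {
        watchdog_interrupt_8s();
        set_sleep_mode(SLEEP_MODE_PWR_DOWN);
        for (;;) {
            sleep_mode();              /* CPU core (and most peripherals) off here */
            if (wakeups >= 8) {        /* ~8 x 8 s, i.e. roughly once a minute */
                wakeups = 0;
                /* do the once-a-minute work here */
            }
        }
    }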
> If the hardware doesn't make allowance for this, then the CPU will have to run useless instructions until it is needed for real work.
Do any consumer processors (think Intel iBlah) not support turning off the CPU when it's not needed?
Generally, most modern CPUs support turning off when not needed; however, this is usually referred to as power-on standby (S3, IIRC). The CPU is off, most things are off, RAM stays on.
The CPU itself has to continue to run because there is almost never a period longer than a few seconds in which there is truly nothing to do, and shutting down CPU cores and down-clocking the remaining one is efficient enough.
Pretty much until the early 2000s, PC processors essentially idled at full speed. There are probably some super-cheap, bottom-of-the-barrel, low-end ARM chips still being made somewhere that can't sleep or down-clock.
The aim was low power usage, but they came at it from a different direction. This is referring to the OS's control over power states on a traditional (clocked) processor.
AMULET gained its low-power capabilities from not clocking any unused functional blocks during normal usage; same aim, different strategy.
It's standard fare really. Dropping into a low-power mode in an idle loop has been standard practice for that long also.
In embedded systems, the real difference is how you can schedule your application-level events to make optimal use of the low-power states of the core you're using, the other cores, and on-board devices (e.g. flash, ADCs, DACs, etc.).
This is how iOS and Android try to have an effect: by managing the applications' use of timers and wakeups from external devices (interrupts) so as to maximise the 'sleep' time.
>"In this loop, the CPU scheduler notices that a CPU is idle because it has no work for the CPU to do. The scheduler then calls the governor, which does its best to predict the appropriate idle state to enter. There are currently two governors in the kernel, called "menu" and "ladder". They are used in different cases, but they both try to do roughly the same thing: keep track of system state when a CPU idles and how long it ended up idling for."
Could someone say exactly what "the governor" is? Is it a code path in the scheduler? It wasn't clear to me from reading the article.
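Roughly: in the kernel tree a governor is a small set of callbacks registered with the cpuidle core (see drivers/cpuidle/governors/ for menu and ladder), which the idle path calls into, rather than being part of the scheduler itself. A very rough sketch of the shape (simplified, and the exact signatures vary between kernel versions):

    /* sketch of roughly what a cpuidle governor looks like; the real menu and
       ladder governors live in drivers/cpuidle/governors/. Simplified, and the
       select() signature differs between kernel versions. */
    #include <linux/cpuidle.h>
    #include <linux/init.h>

    /* pick which idle state ("C-state") the idling CPU should enter */
    static int toy_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
                          bool *stop_tick)
    {
        *stop_tick = false;   /* keep the periodic tick; we don't expect to sleep long */
        return 0;             /* always pick the shallowest state */
    }

    static struct cpuidle_governor toy_governor = {
        .name   = "toy",
        .rating = 1,          /* low rating: never auto-selected over menu/ladder */
        .select = toy_select,
    };

    static int __init toy_governor_init(void)
    {
        return cpuidle_register_governor(&toy_governor);
    }
    postcore_initcall(toy_governor_init);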
I don't remember if it was a true story or just a joke.
What causes this? I would have thought that "stop computing for a bit" would be a simple thing to do, but I clearly don't know much about processor design.
The quickest to enter and exit (C1) simply clock-gates the core. Caches are preserved. The next C-state might turn off caches too (and thus incurs the penalty of flushing caches on entry and starting with a cold cache on C-state exit). Further C-states might require even more work to enter and exit but consume much less power while in that state.
The cpuidle governor decides which C-state to enter, since entering and exiting a deep C-state may end up consuming even more power than keeping the core running or in C1.
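You can watch those trade-offs from user space: each state's exit latency and how much time the CPU has actually spent in it are exported under sysfs. A small sketch (Linux-specific paths, cpu0 only):

    /* sketch: dump the idle states ("C-states") the kernel knows about for cpu0,
       with each state's exit latency and accumulated residency, from sysfs. */
    #include <stdio.h>

    static void print_file(const char *path) {
        char buf[128];
        FILE *f = fopen(path, "r");
        if (f) {
            if (fgets(buf, sizeof buf, f))
                fputs(buf, stdout);        /* sysfs values already end in '\n' */
            fclose(f);
        }
    }

    int main(void) {
        char path[256];
        for (int i = 0; ; i++) {
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/name", i);
            FILE *f = fopen(path, "r");
            if (!f) break;                 /* no more states */
            fclose(f);

            printf("state%d name: ", i);
            print_file(path);
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/latency", i);
            printf("  exit latency (us): ");
            print_file(path);
            snprintf(path, sizeof path,
                     "/sys/devices/system/cpu/cpu0/cpuidle/state%d/time", i);
            printf("  time in state (us): ");
            print_file(path);
        }
        return 0;
    }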
Even if there is no change in voltage, the processor will usually just leak power while making no forward progress on the program as the various timing circuits change.
IIRC the majority of phones out there use tickless config, so the ARM CPUs that benefit here are the server-class ones. But good show, indeed!
It has also given its name to the human activity of "NOPping", similar to zoning out.
This reminds me that earlier operating systems like DOS and Win9x kept the CPU in a busy polling loop when idle --- which was great for responsiveness, but not for power consumption or heat; applications like http://www.benchtest.com/rain.html soon appeared, which replaced the idle loop with an actual HLT loop and actually had a noticeable effect. The DOS version is at https://maribu.home.xs4all.nl/zeurkous/download/mirror/dosid...
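The essence of what those utilities installed as the idle loop is tiny (sketch below). HLT parks the core until the next interrupt instead of spinning at full power; it's a privileged instruction, which was no obstacle under DOS, but it means this only illustrates the idea and won't run in an ordinary user process on a modern OS:

    /* sketch: an "HLT idle loop". HLT stops the CPU until the next interrupt
       (timer tick, keyboard, ...), rather than burning power in a busy loop.
       Privileged: fine in DOS real mode or ring 0, not in user space today. */
    static void idle_loop(void) {
        for (;;)
            __asm__ volatile("hlt");
    }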
Likewise, given that some OS APIs (syscalls) provide both non-blocking and blocking modes, should we prefer the blocking ones for energy efficiency?
On a single-CPU system it works because when the current process stops running (calls sleep, does I/O, etc.) the kernel looks to see if there's anything else to do; if not, it knows the system is now idle.
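The contrast the question above is getting at, in code form (a sketch using plain POSIX calls): a non-blocking busy poll keeps a core out of the idle loop entirely, while the blocking call parks the thread in the kernel so the scheduler can find nothing runnable and drop the CPU into an idle state.

    /* sketch: same job, two ways. The busy-poll version keeps a core 100% busy
       even when no data arrives; the blocking version sleeps inside the kernel,
       letting the CPU idle until data shows up. */
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* burns CPU: non-blocking read in a tight loop */
    ssize_t read_busy_poll(int fd, void *buf, size_t len) {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
        for (;;) {
            ssize_t n = read(fd, buf, len);
            if (n >= 0 || errno != EAGAIN)
                return n;                  /* data arrived (or a real error) */
        }
    }

    /* lets the CPU idle: the thread sleeps inside read() until data arrives */
    ssize_t read_blocking(int fd, void *buf, size_t len) {
        return read(fd, buf, len);
    }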
1. This is an exaggeration but the idea is that they have nothing to do because no one is buying or using them anymore (since the Intel vulnerabilities were uncovered).
2. A platform is a kind of software. A shelf is a kind of platform (different definition). Before items such as Intel CPUs are sold, they sit in warehouses for a while. Intel is not good for running any software platform so the best platform for them is a shelf... Also alludes to the fact that no one is buying them and that no one should buy them. (Also exaggerations).
This would be great for CryptoNote web miners running WASM in the browser, to help users understand that you don't have to juice every thread/CPU in order to mine effectively at scale using a proxy like the one provided in Webminerpool.
You mean "in order to spend an extra $2 in power to make 3c for someone else"? Very effective.
As long as there are more price-efficient mining pools which are an appreciable fraction of mining power, it will not be cost effective to mine anywhere less efficient since margins will naturally approach what those larger pools can support.
A consumer desktop will never be able to compete with a centrally cooled data centre, which likely gets special power rates and was intentionally built in a location where power is cheaper. Especially not if it's having to go through wasm.