Why Do Computers Use So Much Energy? (scientificamerican.com)
221 points by tshannon on Oct 4, 2018 | 166 comments



> Precise estimates vary, but currently about 5 percent of all energy consumption in the U.S. goes just to running computers

30% goes on transport.

> the human brain is a computer. This particular computer uses some 10–20 percent of all the calories that a human consumes

So society's computer uses 5 percent and our own, evolved over millions of years, uses up to 20.

I'm not saying computers can't or shouldn't be made more efficient, but relatively speaking, the title of this article is pretty bizarre. Computers don't use very much energy compared to almost everything else in the world.


My thoughts exactly.

When reading the title I thought "They do?"

My laptop consumes roughly as much power as the single 60W incandescent lightbulb that used to be required just to write on a pad of paper in the dark.

It's capable of quite a bit more than that, however.

EDIT: Updated power specifications to exact amounts.


If your laptop consumed that much power it would be too hot to use on your lap and also would probably damage whatever surface you put it on. Just because a power supply has a max rating of X Watts doesn’t mean it’s always using that much. Even then, 200w for a laptop is an enormous amount. Maybe you meant LED bulbs that emit the equivalent light of a 100w bulb? That would be much closer to standard laptop power consumption.


Indeed.

I have a ThinkPad with an i7-7700HQ, a 35W TDP beast, and even under very heavy load I still get 90 minutes out of a 72 watt-hour battery, and it gets pretty warm (and the TP has excellent cooling). 200W would melt your legs.
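
For a rough sense of what that implies, a back-of-the-envelope sketch using just the numbers above:

    battery_wh = 72
    runtime_h  = 1.5                    # 90 minutes under heavy load
    print(battery_wh / runtime_h)       # ~48 W average whole-system draw
    print(battery_wh / 200 * 60)        # a constant 200 W would drain it in ~22 minutes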


Intel TDP is a sham - it's based on the published non-turbo frequency which is 2.8GHz.

My primary laptop has its CPU undervolted (offset) by a whopping 170 mV, for effectively around 30% lower power consumption.


I guess that's what active cooling is for. So your legs don't have to take the brunt of it. Anyway, a laptop's power consumption isn't concentrated in one chip. So combining the wider area that dissipates heat with the fact that hot spots are actively cooled means a laptop can easily go over the above mentioned 60W for prolonged periods.

And power consumption tends to add up from sources that are otherwise not very power hungry themselves (compared to a CPU or GPU). A few W with the LCD, a few with the SSD/HDD, etc.


> If your laptop consumed that much power it would be too hot to use on your lap

A lot of laptops do actually get too hot (and are too heavy) to effectively be used on laps. Some even require two concurrent power connections from two separate power bricks in order to charge!


Pretty much all laptops have a sticker underneath that tells you not to use them on a lap.

My old dusty i7 laptop did get much much too hot though.


Your laptop consumes 200w? Maybe I'm not reading this right, but I have a modern 3.5Ghz/8-core server in the closet which runs a small army of VMs and its from-the-wall draw never exceeds 80w (about 35w on idle).


My laptop is far from modern. It also has to power an old screen.


Wild guess - the disconnect here is that you're interpreting the maximum power rating of your laptop's power supply as an indication of what it typically draws.

It's more an indication of what your laptop would draw if it were doing GPU-accelerated deep neural net training while recharging a completely drained battery, plus a healthy margin of error just to be on the safe side.


200W still seems really high. A +10yo MacBook Pro used an 85W charger, and it rarely if ever reached that max. I find it hard to believe that even an old gaming laptop would require over twice that.


Can we please stop the guesswork in this thread, comparing gaming laptops to 10-year-old MacBooks, and look at the data? Just shy of that 10 years ago, in 2009/2010, mobile Nehalem processors were introduced, and more importantly, in early 2011, mobile Sandy Bridge arrived. Sandy Bridge was super powerful. How much more powerful?

2 years, 2 generations before Sandy Bridge: Core 2 Quad Q9000 (Q1 2009) -> i7 2820QM (Q1 2011) = +100%

Sandy Bridge to 6 generations later, 6.5 years later: i7 2820QM (Q1 2011) -> i7 8550U (Q3 2017) = +37%

Yes, I'm mixing tick-tock cycles or low voltage with "standard versions". So let's see a more appropriate competitor:

Sandy Bridge to 5 generations later, 6 years later: i7 2820QM (Q1 2011) -> i7 7820HQ (Q1 2017) = +50%

So we had processors that doubled the performance in 2 years (and more than doubled it if we disregard the not-so-common Core 2 Quads and take the often only two-core mobile processors). Yet, 6 years later, there was only a 50% increase. Until Ryzen arrived recently and pushed Intel to quickly respond with Skylake-X, i9s and FINALLY, OH FINALLY, 6 (powerful) cores in laptops, there were no real performance reasons to upgrade, only heat or thickness; those went down, but performance didn't go up so much.

So, around 2011, those were crazy times, when you could have:

45 W or 55 W CPU (i7 2820QM or 2920XM)

55 W GPU (Quadro 2000m)

two SATA drives

four DIMM modules

In a non-gaming, back then "standard" size laptop (ThinkPad W520). And under heavy load, anything under Tjunction (say, 95 °C) was good enough. Some laptops with only a single 55 W dissipating fan (such as the ThinkPad W520) often throttled under full CPU and GPU load and couldn't run Turbo with full GPU load, others (such as Dell Precision M6600) had two separate fans and some even came with Quadro 4000m GPU with a TDP of... 100 watts!

So, back then, those laptops ordinarily came with 170+ W power supplies (even over 200 W), and just the CPU+GPU could draw well over 100 W (add the rest of the system, a hard drive here, a hard drive there, all four DIMM slots populated, charging the battery) and it would easily go over twice that of the 10-year-old laptop with its 85 W charger.

And these are workstation models. If you mention gaming laptops, those sometimes had desktop CPUs in them and required 300 W PSUs. Nowadays they also use desktop GPUs and require TWO 300 W PSUs...

Please, don't base observations on 10-year old Tick (probably Penryn?) with 85 W chargers, when 7-8 years ago, a Tock (Sandy Bridge) came with literal concrete bricks as power supplies and really pulled 200 watts. Thankfully, things got dialed back after that, but at the cost of performance until now and current mobile 6-core i9s and Xeons.

And to the lap comments - you really didn't want to have those laptops in your lap when they were plugged in and turbo-ing...


you will need to revise the power once more: when idle modern notebooks use 5-10 W, even under load it's closer to 30 W.

These guys measure idle power consumption for all laptops

https://www.notebookcheck.net/HP-Elite-x2-1013-G3-i5-8350U-S...


I was going to say that you could write just fine by candlelight instead, but it turns out a candle uses about as much energy as that light bulb (based on random internet spot-checks).


The increase in energy-to-services efficiency of lighting has been stupendous. If you go all the way back to brands or torches (the burning sort), candles, oil lamps, gas mantles, incandescent, fluorescent, and now LED.

I have to keep reminding older concerned family members that a light might consume a dollar's worth of electricity per year, or that the bright overhead floods actually consume half the power of the 'low-energy' under-cabinet system (incandescent).


Actually, thinking back on this, I need to do a different calculation. The candle is directly converting hydrocarbons into heat and light. If you burned those hydrocarbons in a power plant, how many watt-hours would be produced? (For that matter, how long will that candle burn for?) Ignore delivery/transmission costs: I'm curious what kind of losses there are in burning (say) oil for electricity vs. just setting it on fire, in the context of lighting.


You're forgetting the vast network of much more energy-hungry computers your laptop is connected to.


The contrast! My laptop can consume as little as ~3-4w when I'm just scrolling a webpage.


>So society's computer uses 5 percent and our own, evolved over millions of years, uses up to 20.

This comparison doesn't make a lot of sense to me. It's not even comparing apples to oranges, it's comparing apples to the concept of taste.

Furthermore, these percentages don't really tell you much regarding efficiency. I'm sure there are less developed societies where computers account for less than 5% of all energy consumption; that doesn't mean their computers are more efficient.


Why not, though? Energy is energy -- we (humans) oxidize food, basically burning it slowly, to power our bodies. Whether the energy is carried by glucose/ATP or electric current is just an implementation detail.

Comparisons are frequently made in other domains as well -- e.g., cyclists measure power outputs in watts.

The rule of thumb I've understood is that the brain runs on about 20 watts (peak power), and a human on average runs on about 100 watts. That's 8.64 MJ/day, which converts to 2064 food calories (kcal), at 4184 J/kcal.
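
Checking that conversion (the 100 W average is just the rule of thumb quoted above):

    watts = 100
    joules_per_day = watts * 24 * 3600          # 8,640,000 J = 8.64 MJ
    kcal_per_day = joules_per_day / 4184        # 1 food calorie (kcal) = 4184 J
    print(joules_per_day / 1e6, kcal_per_day)   # 8.64 MJ, ~2065 kcal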


One is saying how much computers use of a global resource pool, the other is how much brains use of their host body. They're completely different things so it doesn't make sense to compare those two percentages.

If you wanted to compare them, cite the percent that the CPU uses of its PSU, or cite the percent that human brains use of global energy consumption. Otherwise they're completely different things that have no meaning in relation to each other.


Exactly. It's a meaningless and even misleading analogy.

If forced to draw a comparison, you could maybe talk about the energy dissipation of a single neuron and compare that to, say, the energy dissipation of a single transistor. And in that case, considering a human brain is powered by something on the order of 10^10 neurons, the energy efficiency of the brain comes out looking pretty good.


One is viewing society as a "body" with computers being its "brain". The other is viewing the human body as a "body" with its brain being its "brain". The comparison is of the energy used by each "brain" relative to the total energy used by its "body".

The comparison is valid if you accept the analogy, although I think you could easily dispute the analogy. E.g. you could just as easily view the network of humans in a society as its "brain" and all the things we build (including computers) as its "body".


> you could just as easily view the network of humans in a society as its "brain"

You could far more easily view the brains of humans in a society as its "brains."


> Energy is energy

Even this isn't quite true, since it's not fungible, except in the narrow use-case of creating heat[1].

> Whether the energy is carried by glucose/ATP or electric current is just an implementation detail.

This implementation detail means that converting between these forms incurs a tremendous cost (in lost energy) and, perhaps more importantly, is nowhere near instantaneous.

[1] Even the slightly broader use case of heating (e.g. increasing the temperature of a home) can be done with a heat pump, and those don't tend to run off of fungible energy.


Calories can be converted to Joules. The amount of energy to run a computer (a human computer or a physical one) is interesting to me personally, as is comparing the amount of extra calories (and thus, extra emissions) it costs me to ride my bike 20 miles to work vs. driving a gasoline car.


Agriculture + metabolism has much lower emissions per km than gas-powered cars.

Also, your emissions are ultimately plant based, so the only real net emissions are due to producing fertilizers and machinery for agriculture and the embedded emissions in your bike, to the extent those are oil/gas/coal based.

[1] https://ecf.com/news-and-events/news/how-much-co2-does-cycli...


I second this. In order for it to be a fair comparison, we have to level the playing field by converting the total available calories (for humans) and electricity (for computers) into some common unit that represents the total capacity available to the US (or the world). Another commenter brought up Joules, which I think would work; I just can't fully wrap my mind around everything that would go into the calculation, like food-growing and transport energy requirements vs. power plant emissions, maintenance, and coal/oil transport.


The article goes on to explain just how wasteful our computers are, compared to the computations that go on in cells (which is a staggeringly large amount of computation, given their speed and abundance).

> But the thermodynamic efficiency of this computation—the amount of energy required by a ribosome per elementary operation—is many orders of magnitude superior to the thermodynamic efficiency of our current artificial computers.

So I think the headline is justifiable in light of this comparison.


Don't get me wrong, I'm well aware of how inefficient computers actually are compared to cells. But I just don't really see the relevance of the comparison.

Computers are so much more efficient than so many things in our world that we could much more easily improve the efficiency of: it would be great to make artificial computers as efficient as our brains, but sinking resources into that effort rather than addressing much more pressing low-hanging fruit isn't really logically sound.

Of course it's not a zero-sum game: we can sink resources into both, but the alarmist headline is still hyperbole for this reason.


The comparison is misguided, I agree. Those teeny 20 watts the brain is using are extracted by killing plants and animals. How about measuring the amount of death per computation?

Also, let's not forget about the enormous time, space and energy used by evolution to make humans. Nothing we invented can rival that kind of resource usage.

Even in Deep Learning, performant models require more data and compute. You can't obtain the same accuracy from running a computer for a day compared to 500 TPUs for a week. Humans have had lots of time to train and evolve; computers have not.


What about bacteria? Unfortunately weighing the value of life is hard. As cute as kittens, and spiders, daffodils and protozoa are, we aren't equal. And evolution did not 'make humans', and we are not the intended product of the big bang. Seems the ultimate hubris to bill the total energy expended by the universe so far to the creation of us forum posting meatbags.


There would be no forum posting meatbags if not for evolution and its enormous resource usage. While I agree that nature did not 'intend' to make humans from the beginning, we, as humans, can't just write it off. Those bacteria are an important step in our cellular apparatus, and people are comparing computers to brains (brain cells at work), here.

And as long as we can't get our energy without killing other living organisms, the energy the brain uses can't be compared to the energy computers use. It's physically the same, but morally not. Of course you could say computers are here because humans have killed living organisms to feed themselves, so computers partake in that, but we'd end up in a loop and never reach any conclusion.


> And as long as we can't get our energy without killing other living organisms

It is perfectly possible to fuel a human only from biological sources that have died a natural death. It's exactly the same as running a computer on fossil fuels, just without the millennia of waiting for sufficient accumulation that you need for building a whole industrial base in one short flash in the pan moment (compared to the millennia of accumulation).

Actually, you don't even have to wait for organisms to die, you can even use fuels where the providing organisms live on, having just shed off a part they don't need anymore. No killing involved in eating an apple. And if you plant the seeds in an advantageous environment you are even taking an active role in their procreation, as the seeds that just drop into the shade of the parent tree won't thrive there.


I don't see how you're reading this headline as "alarmist". It's a valid question worth exploring.


You clearly can't compare those two percentages.

One of them is over the total energy consumption of society and the other is over the energy needed to keep a human alive.

Computers would not get "magically" more efficient if another industry tripled the overall energy consumption of our society, yet that is what your analysis would suggest...


Did anyone click the link? It's comparing to the theoretical limits and the potential of minimal-energy reversible computing[1].

When you talk about "computers using so much energy", the span of inefficiencies could include:

A) Why so much energy per primitive operation (e.g. write bit to data store, XOR two registers)?

B) Why so many primitive operations per desired computation task (sum these three numbers, find the largest of these five numbers)?

C) Why so many computation tasks per unit of economic value delivered?

The article is talking about A). I'm pretty sure we've improved on A) over the history of computation.

I think we've gotten worse on B), per Andy and Bill's law (Intel giveth and Gates taketh away).

[1] Called "reversible" because thermodynamics says the only reason you need to dissipate heat/usable energy is because you perform an irreversible operation, and to avoid that you'd have to do your computations using only reversible logical operations -- where the input space has a one-to-one mapping to the output space.
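
A minimal sketch of that one-to-one requirement, contrasting an ordinary irreversible gate with a reversible one (the Toffoli gate is just a standard example, not something from the article):

    from itertools import product

    def and_gate(a, b):
        return a & b                      # 2 bits in -> 1 bit out: information is erased

    def toffoli(a, b, c):
        return a, b, c ^ (a & b)          # 3 bits in -> 3 bits out: a bijection

    # AND collapses four input states onto two outputs, so the inputs
    # can't be recovered; Landauer ties that erasure to dissipated heat.
    print(sorted({and_gate(a, b) for a, b in product((0, 1), repeat=2)}))    # [0, 1]

    # The Toffoli gate maps all eight input states to eight distinct outputs,
    # so it can be run backwards and, in principle, has no minimum dissipation.
    print(len({toffoli(a, b, c) for a, b, c in product((0, 1), repeat=3)}))  # 8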


I would be surprised if the 5% that goes toward computing didn't knock down transportation from 35% to 30% through improved logistics alone. My gut feeling is computers are a net positive energy wise.

Also what percentage of that 5% goes to mining bitcoin, or re-downloading jQuery for the 5000th time?


I'm sure most of the computation to run society occurs in the human brains of the millions of workers.


Others have commented why your comparison doesn't really make sense, but you should respect the fact that the human brain is actually incredibly efficient if anything.

https://www.scientificamerican.com/article/thinking-hard-cal...

> So a typical adult human brain runs on around 12 watts—a fifth of the power required by a standard 60 watt lightbulb. Compared with most other organs, the brain is greedy; pitted against man-made electronics, it is astoundingly efficient. IBM's Watson, the supercomputer that defeated Jeopardy! champions, depends on ninety IBM Power 750 servers, each of which requires around one thousand watts.


You're comparing two different energy sources - Computers use 5% of the energy we produce, whereas human computers use 10-20% of the food-energy we produce. Computers certainly use more energy than all the human brains in the world.


For anyone heating their home/office in the winter, the energy consumed by their computer effectively has zero cost.

The computing done is value added to energy-use that would otherwise dissipate through the walls and ceiling. Those of us in northern latitudes should be lucky enough to have a couple of servers in the closet helping to pay for heating.


Only if you’re heating your house with electric resistance heat.

Heating with natural gas or heat pump is more efficient (in terms of BTU per dollar) than using the waste heat from computing (which is resistance).


True, for those lucky enough to afford that option (what % of homes have HP's?). Then there's this caveat:

Heat pumps are not great in cold climates. They are most effective at saving energy when in the heating mode. In a cold climate, however, your house needs more heat as the temperature outside goes down—but the heat pump works less efficiently at lower outdoor temperatures. Below a temperature known as the “balance point,” normally from between 30 and 45 degrees F., supplementary heat is required—and that means expensive electrical heating kicks in. https://www.hometips.com/buying-guides/heat-pumps.html

My home town seldom reached 30 degrees from November to April.


That reference is outdated or incomplete, imo. Several manufacturers have air-source heat pumps that work well into the teens and Fujitsu has one rated to -7°F without resistance heat.

Lots of so-called hyper-heat or low-temp heat pumps (often mini-splits) in cold New England. Affording a heat pump is not a high hurdle. You can get a single zone mini-split (heat/cool) for under $1500 with an install typically well under $1000 without a lot of shopping in most of the US.


Compared to a heat pump, that computer is still costing money, because a heat pump can add more heat to the house than an electric heating element using the same amount of energy.
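
A tiny sketch of that difference, assuming a heat pump with a coefficient of performance (COP) of about 3; the COP of 3 is an assumption here, and real values vary with outdoor temperature:

    electricity_kwh = 1.0

    resistance_heat_kwh = electricity_kwh * 1.0   # computer / resistive heater: ~1 kWh of heat per kWh of electricity
    heat_pump_heat_kwh  = electricity_kwh * 3.0   # heat pump at COP 3: ~3 kWh of heat moved indoors per kWh

    print(resistance_heat_kwh, heat_pump_heat_kwh)   # 1.0 3.0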


Yeah, but who moves their computers outside of the closet in the summer? Then we have a larger AC bill to cover, and that's less efficient than heating.


AC is almost always more efficient than heating because AC systems use heat pumps.


A/C moves heat at some energy cost. It's not 1:1 with heating.


> Computers don't use very much energy compared to almost everything else in the world.

In other words: There are plenty of things in the world that use an awful lot more energy than computers.

That fact doesn't make computers any more efficient.


physicist here. Stop comparing numbers that don't mean the same thing. You're making my eyebrows twitch.


It would make more sense to compare absolute numbers.

A brain uses about 20W. A modern computer has similar consumption, but is many orders of magnitude less powerful.


A single modern CPU can crunch more numbers in a second than every person living on earth combined. Sure there may be things that it's not capable of, but most things it is capable of, it's usually 8-10 orders of magnitude more efficient at.


The brain still tops every CPU at visual processing, which may be its highest throughput interface.


It’s almost like brains and computers are somehow different.


That really depends on how you define "powerful". The computers are less capable in several areas, but also more capable by several orders of magnitude in other areas.


You can’t say it’s less powerful, it’s just different. It’s better at some things, worse at others, mostly because of the software.


Eh, my brain may be much more efficient at problems that involve processing complex sensory input and making split second decisions - but it’s absolute trash at doing complex, repetitive maths.

Our brains are highly specialized, you’re comparing an ASIC strapped to a Z80 with modern general purpose CPU’s.


I'm not sure "powerful" is the word you're looking for. It's order of magnitude less _general_.


Well, evolution has spent millennia optimizing human brains, and millions of years optimizing brains in general, while we have been building computers for ~70 years, about a human lifetime.

And for much of those 70 years, energy efficiency was not the top priority.


Yeah, but biology never really had any choice about technology - it was always going to be cells and salts. It's really not about optimization - it's about how lucky we are that real intelligence (if that's what we have) can even be achieved using cells and salts.


My brain (or even my whole body) can't play HD video.


No, it plays way better than that.

Our eyes do not see. Our brains do. And the upscaling from limited data is amazing!


That's largely because the brain is not clocked synchronously; something like 30% of a modern CPU's power is lost in clock trees. The brain wakes up the bits it needs on demand almost at a per-gate (neuron) level.


Maybe we could also say that our civilization is energetically compute equivalent to human level intelligence when we are using 20% of our energy on computers? Alternatively, if the human brain is orders-of-magnitude more efficient than a computer (as argued in the article), then a “civilizational human-level intelligence” is, likewise, orders-of-magnitude ‘away’?


The human brain is a hell of a lot more powerful than a computer.

https://bgr.com/2016/02/27/power-of-the-human-brain-vs-super...


>> Computers don't use very much energy compared to almost everything else in the world.

So highly parallel systems are closing in on a teraflop per watt, which sounds amazing to me. But I really have no idea if that's high or low power.


Comparing these two percentages seems very esoteric: heat dissipated by a CPU (just a function of thermodynamic inefficiency) vs. the fraction of electricity used by society to run computers (a function of a billion human things like economic ease of access to computers, the quantity of computing needed for entertainment, the availability of resources to build computers).


>So society's computer uses 5 percent and our own, evolved over millions of years, uses up to 20.

Organic brains are also learning and doing language/image/sound processing that far exceeds society's computer capabilities.


The human brain isn’t a “computer”, it’s way more complicated and can’t be described in terms of inventions it has created.


You could probably cut that consumption in half if people stopped using interpreted languages in production, quit telling curious young programmers that early optimization is evil or that memory is an implementation detail, etc. etc.

Computers are very efficient; electrical engineers toil and sweat a great deal to make them so. Then we build things with Ruby and Python and make them two orders of magnitude less efficient for, I think, no good reason. What makes those languages pleasant to use is not strictly tied to them being interpreted.


> no good reason

If it takes a developer hours to build something that would otherwise take days, that's a very good reason. Processing power and energy are a tiny fraction of the cost of dev time. For things where scale matters, companies switch to faster languages with more optimization. This is irrelevant for what, 95% of businesses? Granted, probably half of devs are working for big tech where it does matter, but it really doesn't for a lot of us.

In any case, Javascript, which most of us are using more and more, is not that much slower than Java or other low level languages and can be used everywhere.


>energy [is] a tiny fraction of the cost of dev time

I think that is in a lot of cases not true; it's just that the developer (and their company) is not paying the bill, the buyer/user is. While individually this might not be a lot, so the individual won't complain about it either, for applications with a lot of users it adds up.

Say you have an application or a device with 1 million concurrent users, and you could reduce the power consumption by just 1 watt per instance; this amounts to nearly 9000 MWh yearly, which as a total cost for end users in Germany would amount to more than 2.5 million euros (plus the climate emissions). I am quite sure that in a lot of instances you could reduce the power usage by 1 watt and that it would cost less than even 10% of the aggregated savings.

Even simple things like game menu screens having no frame limiters and putting the GPU under heavy load have been a thing in the past (e.g. Diablo 3 at some point). The issue is that the developer has no incentive to improve such things, except in the most extreme cases, as they don't benefit from it, but from a macro-economic and ecological perspective it would be beneficial.
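
A quick back-of-the-envelope check of those numbers (the ~0.30 EUR/kWh German household price is an assumption on my part):

    users          = 1_000_000       # concurrent instances
    watts_saved    = 1               # per instance
    hours_per_year = 24 * 365        # 8760

    mwh_per_year = users * watts_saved * hours_per_year / 1e6   # Wh -> MWh
    cost_eur     = mwh_per_year * 1000 * 0.30                   # MWh -> kWh, times price

    print(mwh_per_year, cost_eur)    # 8760 MWh, ~2.6 million EUR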


Let's be clear about this: high level languages like python and ruby in production use are doing server side web operations, most of which aren't even very intensive - the assertion that this is being offloaded to the user is false. Anything that a low level language would replace is server side. A Java web app still needs to serve the same javascript that a Ruby or Python app does.

Javascript is used client side, and again, it's quite fast, with performance comparable to Java and C++ in most use cases. And a web app is basically never going to reach the cpu/gpu levels that even the lightest game coded in C++ will reach, with rare exceptions.


> no good reason

It's a trade-off. You can compute everything in C, or some things in python. In the former scenario you're trading off software engineering cost with computational efficiency. Since software engineering is insaaaaaanely expensive (I code stupid stuff all day and am paid $150k/y) and processors are dirt cheap compared to them, it makes a lot of sense to do some computing in python/ruby/javascript.


Is time or energy a more scarce resource?


There's also the question of how little energy is enough to perform computations, and I've enjoyed greatly a video on the subject by PBS Space Time:

"Reversing Entropy with Maxwell's Demon | Space Time"

https://www.youtube.com/watch?v=KR23aMjIHIY

Turns out that even in the case of an "ideal Maxwell's Demon", the memory operation required to handle the pass/stop decision necessitates a certain energy use.


The response to comments at the end is amazing. Subscribed.


> Why Do Computers Use So Much Energy?

Ads.


Javascript too.


Improved some in the past few years, but Anti Virus as well.


Go green, go HaikuOS


Marketing, industry structure

1: People will buy what's new and shiny.

2: Companies produce what can integrate into what sells.

3: loop


Don't forget physics.


Also, wide character sets. ASCII is all anyone ever needed.


I don't understand. The prevailing encoding (utf-8) is only as wide as ASCII is for encoding the same characters, so you are getting what you want here, unless it's an internet without other cultures and languages on it.


Kind of shocked to be reading this... have you ever made software for non-English speaking users? ASCII is not enough.


I'm pretty sure this is just Poe's Law in action.


Clearly! Silly some of us, persisting in our non-æmerican ways.


Don't know if you're trolling or not, but I fully agree. So much time could be saved by making the networks and internet ASCII-only


What if a client/user doesn't read/write English?


Time would be saved because they won't use the system. /s


teach them one international language from the start of their life


Esperanto uses characters outside of ASCII.


that's probably why it's not the international language I meant lol


>These analyses have provided some astonishing predictions. For example, we can now calculate the (non-zero) probability that a given nanoscale system will violate the second law, reducing its entropy, in a given time interval. (We now understand that the second law does not say that the entropy of a closed system cannot increase, only that its expected entropy cannot increase.)

I think they meant to say decrease in the parentheses. But that's tantalizing. I'd like to hear more about that.


What I don't understand is how a phone can be so powerful with so little watt usage, especially compared to a PC.


1) Your phone is mostly sleeping (extremely low power state). Your laptop also uses very little power when sleeping. That is why your laptop can sleep for a week++ but can only run for 3-8 hours.

2) Your phone is not nearly as powerful as your laptop despite similar specifications of the CPU speeds. Pretty much everything other than the base CPU clock rate is much, much slower. In addition, your phone CPU down-clocks itself due to thermal limitations, whereas your laptop has a fan and much better heat dissipation capabilities. As a result, it is almost never running at the maximum rated clock speed.

On a YouTube talk I watched (skewering web site speeds and JavaScript overloads), the speaker's iPhone, with CPU specs similar to his laptop, was 25 times slower than his laptop when rendering a (JavaScript heavy) benchmark page.


Sounds a bit like "Progressive Performance (Chrome Dev Summit 2016)": https://www.youtube.com/watch?v=4bZvq3nodf4


Yes, thanks!

Where he talks about benchmarked speed vs. marketing numbers: https://youtu.be/4bZvq3nodf4?t=676


JS on iOS is probably not jitted since writing to executable memory is prohibited on iOS. This is not the case for desktop.


Phones are much less powerful than PCs, but they feel comparably fast because most software developers waste exactly as much performance as they can get away with. The phone app developers are forced to write better, faster code than the PC app developers, just as PC app developers ten years ago were forced to write better code than PC app developers are today.


So so true. Every time I go back to programming microcontrollers in C I'm amazed at how much can be done with just a little code. We waste so much on fluff. But then, we do enjoy the fluff.


Fluff in theory should equate to productivity.

Sometimes we stop just a bit too soon. I’ve found a great rule of thumb is that once you’re code complete on a module, put in an extra 10% of work with the profiler to try to improve performance. Keep doing 10% spurts until the performance improvement of the last spurt is less than 5x.

There’s almost always low hanging fruit which will return at least 10x. Sometimes you find multiple places where you get 10x and your code ends up more than 100x faster.

And at that point all your unit tests are written and green so the refactoring is pretty quick. Right? ;-)


It helps to have performance targets. Like 60fps, sub-100ms ui reactions, no stuttering, throughput matters or not, timing attacks are a concern or not.

It's often the case that optimizing a special case 10 times increases throughput, but makes the user experience worse (because it starts stuttering), makes the code ugly, and allows for timing attacks. So, in many contexts it's actually bad idea.

Half the work in optimization is measuring the correct thing.


> Phones are much less powerful than PCs, but they feel comparably fast

I beg to differ.

My Droid Turbo is powered by a 2.7 GHz quad-core CPU, yet it is often incredibly slow. If I want to open Waze and enter an address to navigate to, it's often 10-15 seconds from the time I tap the Waze icon on the launcher until the app is loaded and the on-screen keyboard has presented itself for typing.

There are significant stutters when I'm trying to scroll while a web page is loading.

Fallout Shelter takes nearly an entire minute to load.

Sometimes, plugging in a USB charger causes the phone to basically become a brick for half a minute or more, as if there are tons of apps deferring processing until I'm on external power.

Overall, it often doesn't feel any more responsive than my original Droid back in 2010, which was only a 550 Mhz single core.


There's a lot to blame: the software bloat, delivery mechanisms, user-hostile decisions that prioritize ads, and all the rest. But clock rate is truly meaningless as a performance measure when the underlying architectures are different...


> software bloat

This came up on another thread here from a blog/article titled "software disenfranchisement". I mentioned that I think a lot of it is an unhealthy obsession with writing re-usable code that results in abstractions on top of abstractions piled onto more abstractions to the point where we get the infamous FizzBuzz Enterprise Edition [0] that I'm sure nearly everyone at HN has seen. We also have a problem of people statically linking massive libraries when they only need a small fraction of its code.

> user-hostile decisions that prioritize ads

Definitely a problem in mobile web browsing. Ridiculous that on a data-limited connection, ad networks still try to push video ads or ridiculous JavaScript code.

> But clock rate is truly meaningless as a performance measure when the underlying architecures are different...

Definitely true. ARM gets a lot less done per clock tick than x86. But I imagine a lot of the performance differences are done on purpose as a compromise to reduce power consumption, since ARM is so big on mobile platforms. This is entirely speculation on my part, though.

[0] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


There are a lot of efficient PCs available. My HTPC sleeps when not playing, and when streaming 1080p/4K videos it only uses about 25 watts... which isn't bad considering it's streaming from a NAS with Plex through to an Xbox One.

Phones, on the other hand, are highly specialized - PC's are general purpose. Trade off is often power requirements and performance.

My 8-core, 16-thread Ryzen gaming PC smokes anything my phone can do, and I guess if I really wanted to compare I could do a task-per-watt calculation and see how they compare.

When I use my phone like a computer, its power draw increases and I get just a couple hours of battery use. Play Fortnite on your phone and it goes dead quick.


It’s also a consequence of constraints. There has not been a pressure to make desktops more energy efficient, since they’re attached to a permanent source of incredibly cheap power. Even laptops spend a large portion of their time wired in.

Phones on the other hand are rarely plugged in, and consumers clearly demand more battery life. This has led to resources being poured into cell phone battery capacity and into efficiencies in power usage.


I agree with your point, but I run a cluster of 6 ODroid systems (running an Emby server, which only runs on one of the nodes), and even then it peaks at around 50 watts last time I measured (the hard drive array is separate and peaks at around 30).

I haven't done a ton of side-by-side comparisons, but it does feel like ARM runs more efficiently per watt.


I suppose something like an i7 or Ryzen would be mostly idle running a similar load, and would consume a comparable amount of power, plus a lot of power to keep its hi-perf features, like huge caches and other, idling cores, up and ready to take load any nanosecond it arrives.

Phone SoCs specifically address that by having 2+ CPUs of varied power, and having the high-power CPUs completely shut down and powered off when idle.

Same applies to GPU cores that eat significant power when idle, and a lot of power when fully loaded. A PC with a 600W PSU likely features a large GPU that can consume e.g. 300W under full load, doing 60fps photorealistic rendering, or training a neural network. Phones just don't have this kind of hardware (they have much smaller and less complex GPUs), and, again, manage to keep them powered down when not in use.

Again, a PC drawing 50W or less while doing light office work is a pretty normal thing.


It is definitely the case that pushing raw performance erodes perf / watt on a given process and architecture. You could slightly underclock and moderately undervolt many CPUs or GPUs to see a fairly significant gain in perf / watt.


Power consumption is more or less square to per-thread throughput.

Power consumption is between square and m^n to inter-core bandwidth and count.

So, a high-end smartphone CPU might do 1/5th of the throughput of a desktop CPU using 1/25th of the power.


Those formulas seem highly unlikely to me. Power consumption is generally proportional to clock speed. And I don't know what m^n even means in this context.


I'm trying to understand what blattimwind is saying here, but this might be a case where deep understanding has suffered from a badly worded post.

Just guessing here, by pulling apart what little is in there:

* Total power usage has big fixed parts, e.g. screen power usage going up linearly with brightness and screen size, or RAM power usage going up linearly per bit just to keep the memory intact. So let's assume OP is talking only about the 'clocked' components, like CPUs and buses.

* I did hear, for single-core CPUs, that power usage rose with the square of the CPU speed, unless offset by die shrinking or technological advances. This was one of nature's walls hit by the Pentium IV. Can't find a good reference atm, but here http://www.tomshardware.co.uk/answers/id-3025075/higher-cloc... is a page whose first graph looks square-ish, showing what happens to power usage when the clock goes up on the same machine.

* And throughput is linear with clock speed, if nothing else changes. If you double your clock, you double your throughput just by moving more data over the same bus. So the third line of OP is consistent with this: 1/5 of the throughput is reachable with 1/5th of the clock, thus 1/25 of the power.

* The first line might compare two ways to raise throughput: either raise the speed of one core or add more cores. When you raise the speed of one core, power goes with the square of this rise. When you add more cores, power goes linearly with the cores, assuming your interconnect is free.

* The second line, 'between square and m^n', needs some interpreting. It might make sense if for a square n=2, and m measures frequency/throughput/core count/... . I assume OP means the real relation might be, for example, power usage = throughput^2.3.

* When you have m cores, and each is interconnected to its m-1 brothers, you have m*(m-1) interconnect elements using power. This is again a square law. You can get around this by not connecting everything to everything, but this will cost you in data throughput - There will be bottlenecks.

That's what I can make of it.


Update: The link between power and frequency seems to be complicated. See e.g.

https://physics.stackexchange.com/questions/34766/how-does-p...

The first answer says P goes linearly with f, based on theory.

Second answer goes for cubic or worse based on some deep papers, but a comment says this is dated.

Third answer has a graph from real-world measurements which is clearly worse than linear, and a comment saying there is an exponential part somewhere there too, but it is swamped by linear factors.

Wikipedia, by the way, goes for linear - https://en.wikipedia.org/wiki/CPU_power_dissipation#Sources

I think the most realistic answer I can give is: It is complicated, and OP is a very good nerd sniper ;-)


A phone, while powerful, is still nowhere near a modern PC in terms of performance. As you push for more and more performance, you’ll pay increasingly more, both in power and money, for increasingly smaller gains.


Because they aren't both efficient and fast at the same time. They have two sets of cores. Fast out-of-order cores that will burn through your battery in a few hours (this is what is generally measured by benchmarks) and slow efficient in-order cores that can last one or two days in standby.

If you actually use battery-intensive apps like Google Maps, you will quickly realize your battery isn't going to last very long.


This is only true when comparing older PC hardware.

There is simply no comparison to make when we are talking raw compute power. If there were, the DC market would be dominated by x86 processors with stupidly low power consumption. The cost savings from the smaller power/heat footprint would be absurd.


There's a lot less of it, and PCs aren't nearly as optimised for low energy usage.


Laptop CPUs are still quite a bit faster for complex computation; on a phone that's rarely needed, and they have a GPU/NPU as coprocessors to avoid wasting energy there.


They are compared to the amount of work they can do per watt when compared to the original IBM PC.

Old school mainframe power supplies are probably even less efficient per watt.


While the article deals with theoretical limits, there are some practical limits. As they are already coming up in other comments might as well list them out:

[...] There are several factors contributing to the CPU power consumption; they include dynamic power consumption, short-circuit power consumption, and power loss due to transistor leakage currents: [...]

[1] https://en.wikipedia.org/wiki/CPU_power_dissipation

[2] https://physics.stackexchange.com/questions/34766/how-does-p...


CMOS especially, which consumes energy in direct proportion to the square of the frequency because a level crossing is a virtual, instantaneous short-circuit across the chip's power rails.

As such, future chips will likely be driven to reversible / adiabatic tech because cryptocurrencies, AI and more are guzzling energy ($$$ and running more circuits) and dissipating more heat than can be sensibly handled in a given unit volume (per-socket thermal characteristics; chilled water plant capacity; forced per-cabinet cooling or submersion in mineral oil).


Seth Lloyd wrote a nice paper about the thermodynamics of computing:

https://www.ncbi.nlm.nih.gov/pubmed/10984064

Also refer - https://www.nature.com/articles/nature13570 by Igor Markov


Because CPUs are the least efficient computing devices, precisely because they are so general.

The future of hardware will be reprogrammable circuits that specialize to repeatable tasks

ICFP 2018 Keynote Address: The Role of Functional Programming and DSLs in Hardware https://www.youtube.com/watch?v=VqYkcGRr8x0


But why? Most of the cost of, e.g., a website is the software, not the hardware. Making the hardware cheaper but the software more expensive (e.g. with a DSL) is counterintuitive.


The Feynman Lectures on computation[1] are a physically grounded treatment of the subject and include a section on heat and energy usage.

[1]https://www.scribd.com/doc/52657907/Feynman-Lectures-on-Comp...


Because energy is cheap, and optimizing for efficiency would be a waste of time/money (currently).


Energy is cheap if you're operating a desktop computer that's plugged into the wall.

If you're operating off of a battery, which accounts for a huge amount of consumer computing (especially if phones count), then it starts to matter a lot. Which is why consumer devices tend to be so heavily optimized for efficiency.

If you're operating a datacenter, it also quickly becomes a big part of your costs. Which hasn't necessarily been an immediately pressing concern for Intel, sure, but that may be why companies like Google have been toying with ditching Intel for ARM.


They are called semiconductors after all. Even when fully 'switched' on they are not very good conductors and cause a voltage drop across the junction. That equals heat.


The reason computers use energy is that transistors have a finite signal rise time and are bound by Ohm's law. This is unlikely to ever change.


Indeed, but smaller transistors (less capacitance, thus less charge transfer) and lower voltages can reduce the power consumption of digital circuits. Also, more efficient software could reduce the amount of switching that has to occur. And I suppose one has to consider the power consumption of some peripherals such as LCD back-lighting.

Long ago I had an internship where we were developing a battery powered microprocessor based gadget. We had a prototype with a knob for the clock speed, and we slowed down the clock until the thing started being noticeably slow. Then we chose fixed components for the same clock speed. That's how we optimized battery life.

Note that my thinking is based on relatively simplistic CMOS processors. I don't know how this applies to higher performance computers in cell phones and laptops, with dynamic memory and other goodies.
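
For reference, the usual first-order relation for CMOS switching power is P = a * C * V^2 * f (activity factor, switched capacitance, supply voltage, clock frequency). A toy sketch with made-up, order-of-magnitude numbers (not real chip specs) showing why smaller transistors and lower voltages help so much:

    def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
        return activity * capacitance_f * voltage_v**2 * freq_hz

    old = dynamic_power(0.1, 1e-9, 1.2, 2e9)    # hypothetical block: ~0.29 W
    new = dynamic_power(0.1, 5e-10, 0.9, 2e9)   # halve C, drop V to 0.9 V: ~0.08 W

    print(old, new, old / new)                  # roughly a 3.5x reduction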


Is there such a thing as clockless parts of a cpu that only get io on a signal by signal basis?


Yes, especially in the embedded world, you can shut down portions of the system that you aren't using, or have them only wake up when some sort of event occurs.


Isn't zero also finite?


That's true, and as long as current is coursing through those poorly conducting components there will be resistive losses of some kind. The resistivity of silicon is way worse than copper's. It's just that on the ramp from nothing to everything, the resistance is a lot worse. But even at zero on the gate there is going to be some leakage, I think.


dna transcription is a computation? bah. any computer uses about 10% of its capital cost in power annually. is that really a lot? sure, if you run it for 10 years, but the real sin there is running such an inefficient computer in year 9. running it for 3 years is a small deal.

sure, we pay a high power cost for reliable, deterministic, digital and synchronous computation. but we designed it that way, for that reason. we don't want unreliable or non-deterministic or approximate or whenever computation.

no question that some practices waste power. but we're doing just fine: power per computation (and probably more importantly, for communicating with memory and other systems) is coming down fast. move along, usual moore's law-like behavior, less to see each year...


We can try to make more efficient computers by using cryptocurrency mining algorithms that help incentivize the development of these energy efficient reversible computers instead of algorithms like SHA-256 which happen to be quite buggy (Some bugs include ASIC-boost and approximate mining) when we use it as a mining problem instead of just a regular hash function (I have developed one of these reversibility friendly mining algorithms myself). Maybe it is better to actually give some thought into the cryptocurrency mining algorithm that you are using before the government bans you for foolishly wasting energy.

-Joseph Van Name Ph.D.


> Also, to foster research on this topic we have built a wiki, combining lists of papers, websites, events pages, etc. We highly encourage people to visit it, sign up, and start improving it; the more scientists get involved, from the more fields, the better!

Thermodynamics of Computation Wiki https://centre.santafe.edu/thermocomp/Santa_Fe_Institute_Col...

HN: https://news.ycombinator.com/item?id=18146854


Where is the date and author of this article? It seems to be missing!?


> By David Wolpert on October 4, 2018

But I’ll take this opportunity to make my usual PSA: any time you are posting about the state of an industry please please date your material


> (We now understand that the second law does not say that the entropy of a closed system cannot increase, only that its expected entropy cannot increase.)

they mean decrease, right?


This is pretty cool, a low-powered 144-computer chip.

http://www.greenarraychips.com/


Not really. It's expensive, a dick to program, no FP support and single thread performance is rubbish. A cheap ARM is a better deal now and they are quoted in uA/MHz for performance so you can trade off your performance on the fly.


The GreenArrays chips still use dramatically less energy than ARMs for comparable operations, but it does still appear that they are more difficult to program.


Fun fact: Heat generated by quantum tunneling's leakage current inside your processor's transistors is the main limiting factor for cramming more transistors into a smaller space.

https://spectrum.ieee.org/semiconductors/devices/the-tunneli...


It'd be great if we not only measured software performance, e.g. the way we measure web servers in requests/sec, but also requests/sec/watt.
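
On Linux boxes with Intel CPUs you can get surprisingly far with the RAPL counters under /sys/class/powercap; a rough sketch (the path, permissions and the run_benchmark callback are assumptions on my part, and counter wraparound is ignored):

    import time

    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package energy, microjoules

    def read_energy_uj():
        with open(RAPL) as f:
            return int(f.read())

    def requests_per_joule(run_benchmark):
        e0, t0 = read_energy_uj(), time.time()
        requests = run_benchmark()              # your load generator; returns request count
        e1, t1 = read_energy_uj(), time.time()
        joules = (e1 - e0) / 1e6
        watts = joules / (t1 - t0)
        # requests/joule is the same thing as requests/sec/watt
        return requests / joules, watts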


> the human brain is a computer

Woah... really makes you think.


I've been thinking about computer energy use for a while now, although I'm no expert in the area. Here's a comparison of nJ per instruction from my notes:

    |                   |          nJ/insn |
    | MSP430            |              0.9 |
    | PIC24             |               2? |
    | 1990s StrongARM   |                1 |
    | LPC1110           |              0.3 |
    | Pentium           |               10 |
    | STM32L0           |             0.23 |
    | Ickes DSP 2008    |             0.01 |
    | Subliminal 2006   |           0.0026 |
MSP430, PIC24: http://www.ti.com/general/docs/lit/getliterature.tsp?baseLit...

StrongARM: http://www.researchgate.net/profile/Kristofer_Pister/publica...

LPC1110: http://www.nxp.com/documents/data_sheet/LPC111X.pdf

Pentium: http://www.newscientist.com/blog/technology/2006/08/explodin...

Ickes DSP 2008: http://www-mtl.mit.edu/researchgroups/icsystems/pubs/confere...

Subliminal 2006: http://web.eecs.umich.edu/~taustin/papers/VLSI06-sublim.pdf 2009: https://web.eecs.umich.edu/~taustin/papers/TVLSI09-sublimina...

This is sort of comparing apples to oranges. The Pentium (all of them) uses wildly varying amounts of power for different instructions, and has 32-bit or 64-bit instructions, with hardware floating point. The STM32L0, LPC1110, and StrongARM are 32-bit processors with no hardware floating point. The MSP430 and PIC24 are 16-bit CPUs. The Ickes et al. device includes a 16-bit FFT accelerator and only runs at 4MHz, but it was only fabricated as a prototype; you can't buy it. The Zhai et al. Subliminal device, also only fabricated as a prototype, only runs at 833kHz, and it doesn't even include an integer multiply instruction, but its somewhat limited ALU is 32 bits.

However, _all_ of these numbers are far from the Landauer bound (kT ln 2). Suppose that we don't make any concessions to reversibility in our CPU design, like Metronome and its successors. A 32-bit instruction, then, erases the 32-bit register where its result is stored, costing 32 kT ln 2, or in some situations, an average of 16 kT ln 2. Supposing T = 300 K, 32 kT ln(2) ≈ 0.092 attojoules. That's _over seven orders of magnitude_ better than the prototype Subliminal processor mentioned above, and nine orders of magnitude better than the Cortex-M0-based commercial processors mentioned above.
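
A quick check of that arithmetic, using only the numbers already quoted:

    import math

    k = 1.380649e-23           # Boltzmann constant, J/K
    T = 300                    # kelvin
    landauer_32bit = 32 * k * T * math.log(2)
    print(landauer_32bit)                       # ~9.2e-20 J, i.e. ~0.092 attojoules

    subliminal = 0.0026e-9                      # 0.0026 nJ/insn from the table above
    stm32l0    = 0.23e-9                        # 0.23 nJ/insn
    print(subliminal / landauer_32bit)          # ~2.8e7, over seven orders of magnitude
    print(stm32l0 / landauer_32bit)             # ~2.5e9, about nine orders of magnitude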

Wolpert has published https://arxiv.org/pdf/1806.04103.pdf (mentioned in the article, but not linked AFAICT) which gives better expressions for the cost of computation than the Landauer bound. I haven't finished reading it, but it looks really interesting.


They use a bunch of energy because we can trade power for speed, and entropy is completely irrelevant at 0% of the budget.


At least in part, that's due to servers in DCs not being power managed whatsoever 9 times out of 10


The energy consumed by the US is less than 0.1% of the energy the planet gets from the Sun, so I'm not sure the 5% comparison holds water. Besides, computers return their energy cost many times over.

I do agree with the article's conclusion though: the human brain runs at 20W, on par with a low-power laptop, but arguably the former does much more with that energy.


Just look at transistors needing an amount of current to switch gates. Multiply that by billions.


Because of leakage current.


Because of all the network IO that's now baked into operating systems?


Irreversibility.

Did I miss something?


Wait until Landauer's principle gets proved :)


Bitcoin and high end video cards.


software should be more efficient, not computers


Koomey’s law says new computers use less and less all the time :)


Won't this heat up the oceans, causing more of the Blob in the Pacific?



Because bitcoin and all of the other cryptocurrencies don't mine themselves.



