
Why Do Computers Use So Much Energy? - tshannon
https://blogs.scientificamerican.com/observations/why-do-computers-use-so-much-energy/
======
lucideer
> _Precise estimates vary, but currently about 5 percent of all energy
> consumption in the U.S. goes just to running computers_

30% goes on transport.

> _the human brain is a computer. This particular computer uses some 10–20
> percent of all the calories that a human consumes_

So society's computer uses 5 percent and our own, evolved over millions of
years, uses up to 20.

I'm not saying computers can't or shouldn't be made more efficient, but
relatively speaking, the title of this article is pretty bizarre. Computers
_don 't_ use very much energy compared to almost everything else in the world.

~~~
mikeash
It would make more sense to compare absolute numbers.

A brain uses about 20W. A modern computer has similar consumption, but is many
orders of magnitude less powerful.

~~~
peeters
A single modern CPU can crunch more numbers in a second than every person
living on earth combined. Sure there may be things that it's not capable of,
but most things it _is_ capable of, it's usually 8-10 orders of magnitude more
efficient at.
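A back-of-the-envelope check of that claim; every figure here is an order-of-magnitude assumption on my part, not a measurement:

```python
# Rough sanity check: can one CPU out-crunch all of humanity?
world_population = 8e9
human_calcs_per_sec = 1.0   # generous: one arithmetic operation per second each
cpu_ops_per_sec = 1e11      # a modern multi-core CPU, ~100 billion ops/s

humanity_total = world_population * human_calcs_per_sec
ratio = cpu_ops_per_sec / humanity_total
print(f"one CPU does ~{ratio:.0f}x the arithmetic of everyone on Earth")
```

Even with a generous one-calculation-per-second for every living person, a single chip wins by an order of magnitude.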

~~~
JohnBerea
The brain still tops every CPU at visual processing, which may be its highest
throughput interface.

~~~
AgentOrange1234
It’s almost like brains and computers are somehow different.

------
dexen
There's also the question of how _little_ energy is enough to perform
computations; I greatly enjoyed a PBS Space Time video on the subject:

"Reversing Entropy with Maxwell's Demon | Space Time"

[https://www.youtube.com/watch?v=KR23aMjIHIY](https://www.youtube.com/watch?v=KR23aMjIHIY)

Turns out that even in the case of an "ideal Maxwell's Demon", the memory
operations required to handle the pass/stop decision necessitate a certain
energy use.

~~~
oconnor663
The response to comments at the end is amazing. Subscribed.

------
amelius
> Why Do Computers Use So Much Energy?

Ads.

~~~
zwieback
Also, wide character sets. ASCII is all anyone ever needed.

~~~
bschwindHN
Kind of shocked to be reading this... have you ever made software for non-
English speaking users? ASCII is not enough.

~~~
oconnor663
I'm pretty sure this is just Poe's Law in action.

------
Isamu
>These analyses have provided some astonishing predictions. For example, we
can now calculate the (non-zero) probability that a given nanoscale system
will violate the second law, reducing its entropy, in a given time interval.
(We now understand that the second law does not say that the entropy of a
closed system cannot increase, only that its expected entropy cannot
increase.)

I think they meant to say decrease in the parentheses. But that's tantalizing.
I'd like to hear more about that.
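A toy illustration of that point (my own toy model, not from the article): in a system with only a handful of particles, the Boltzmann entropy of the macrostate routinely drops for a step or two, even though it cannot drop on average. Here particles hop at random between the two halves of a box:

```python
import math
import random

def entropy(n, N):
    # Boltzmann entropy (in units of k) of the macrostate with n of N
    # particles in the left half of the box: S = ln C(N, n)
    return math.log(math.comb(N, n))

def fraction_of_entropy_decreases(N=10, steps=100_000, seed=0):
    rng = random.Random(seed)
    n = N // 2  # start at the maximum-entropy macrostate
    decreases = 0
    for _ in range(steps):
        # pick a particle uniformly at random; it hops to the other half
        n_new = n - 1 if rng.randrange(N) < n else n + 1
        if entropy(n_new, N) < entropy(n, N):
            decreases += 1
        n = n_new
    return decreases / steps

print(f"entropy decreased on {fraction_of_entropy_decreases():.0%} of steps")
```

With only ten particles, individual steps lower the entropy a substantial fraction of the time; the second law survives because the *expected* entropy still never decreases, and the fluctuations wash out as N grows.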

------
pizzakiller85
What I don't understand is how a phone can be so powerful with so little
wattage, especially compared to a PC.

~~~
gvb
1) Your phone is mostly sleeping (extremely low power state). Your laptop also
uses very little power when sleeping. That is why your laptop can sleep for a
week++ but can only run for 3-8 hours.

2) Your phone is not nearly as powerful as your laptop despite similar
specifications of the CPU speeds. Pretty much everything other than the base
CPU clock rate is much, much slower. In addition, your phone CPU down-clocks
itself due to thermal limitations where your laptop has a fan and much better
heat dissipation capabilities. As a result, it is almost never running at the
maximum rated clock speed.

In a YouTube talk I watched (skewering web-site speeds and JavaScript
overload), the speaker's iPhone, with CPU specs similar to his laptop's, was
25 times slower than his laptop when rendering a (JavaScript-heavy) benchmark
page.
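Point 1 is easy to see with a duty-cycle model; all the power figures below are illustrative assumptions, not measurements of any real phone:

```python
def battery_hours(capacity_wh, p_active_w, p_sleep_w, active_fraction):
    # Average draw is the duty-cycle-weighted mix of active and sleep states.
    avg_w = p_active_w * active_fraction + p_sleep_w * (1 - active_fraction)
    return capacity_wh / avg_w

# A ~12 Wh phone battery, screen on 10% of the day at 2 W, sleeping at 20 mW:
print(f"{battery_hours(12, 2.0, 0.02, 0.10):.0f} hours")  # ~55 hours
```

Sleep power barely matters; the battery life is dominated by how long the device spends in its active state.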

~~~
deathanatos
Sounds a bit like "Progressive Performance (Chrome Dev Summit 2016)":
[https://www.youtube.com/watch?v=4bZvq3nodf4](https://www.youtube.com/watch?v=4bZvq3nodf4)

~~~
gvb
Yes, thanks!

Where he talks about benchmarked speed vs. marketing numbers:
[https://youtu.be/4bZvq3nodf4?t=676](https://youtu.be/4bZvq3nodf4?t=676)

------
DenisM
While the article deals with theoretical limits, there are some practical
limits. As they are already coming up in other comments might as well list
them out:

[...] There are several factors contributing to the CPU power consumption;
they include dynamic power consumption, short-circuit power consumption, and
power loss due to transistor leakage currents: [...]

[1]
[https://en.wikipedia.org/wiki/CPU_power_dissipation](https://en.wikipedia.org/wiki/CPU_power_dissipation)

[2] [https://physics.stackexchange.com/questions/34766/how-
does-p...](https://physics.stackexchange.com/questions/34766/how-does-power-
consumption-vary-with-the-processor-frequency-in-a-typical-comput)
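For the dynamic term in particular, the standard first-order model is P = α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). A quick sketch with illustrative numbers, not any real chip's datasheet:

```python
def dynamic_power(alpha, c_farads, v_volts, f_hertz):
    # First-order CMOS dynamic (switching) power: P = alpha * C * V^2 * f
    return alpha * c_farads * v_volts**2 * f_hertz

# Illustrative desktop-class numbers (assumptions, not a datasheet):
# ~20 nF of effective switched capacitance, 1.2 V supply, 3 GHz clock.
p = dynamic_power(alpha=1.0, c_farads=20e-9, v_volts=1.2, f_hertz=3e9)
print(f"~{p:.0f} W")  # ~86 W, roughly the right ballpark for a desktop CPU
```

The V² term is why undervolting and frequency scaling pay off so well: dropping both voltage and clock cuts power much faster than linearly.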

~~~
anon49124
CMOS especially. Dynamic switching power scales linearly with frequency but
with the _square_ of the supply voltage (P ≈ C·V²·f), and since higher clocks
generally demand higher voltages, power grows faster than linearly with
frequency. On top of that, every level crossing briefly opens a virtual
short circuit across the chip's power rails.

As such, future chips will likely be driven to reversible/adiabatic
techniques, because cryptocurrencies, AI, and more are guzzling energy ($$$
and running more circuits) and dissipating more heat than can be sensibly
handled in a given unit volume (per-socket thermal characteristics;
chilled-water plant capacity; forced per-cabinet cooling or submersion in
mineral oil).

------
godelmachine
Seth Lloyd wrote a nice paper on the thermodynamics of computing -

[https://www.ncbi.nlm.nih.gov/pubmed/10984064](https://www.ncbi.nlm.nih.gov/pubmed/10984064)

Also refer -
[https://www.nature.com/articles/nature13570](https://www.nature.com/articles/nature13570)
by Igor Markov

------
platz
Because CPUs, being fully general, are the least efficient computing devices
of all.

The future of hardware will be reprogrammable circuits that specialize to
repeatable tasks

ICFP 2018 Keynote Address: The Role of Functional Programming and DSLs in
Hardware
[https://www.youtube.com/watch?v=VqYkcGRr8x0](https://www.youtube.com/watch?v=VqYkcGRr8x0)

~~~
why_only_15
But why? Most of the cost of e.g. a website is the software, not the
hardware. Making the hardware cheaper but the software more expensive (e.g.
with a DSL) is counterintuitive.

------
User23
The Feynman Lectures on computation[1] are a physically grounded treatment of
the subject and include a section on heat and energy usage.

[1][https://www.scribd.com/doc/52657907/Feynman-Lectures-on-
Comp...](https://www.scribd.com/doc/52657907/Feynman-Lectures-on-Computation)

------
AcerbicZero
Because energy is cheap, and optimizing for efficiency would be a waste of
time/money (currently).

~~~
bunderbunder
Energy is cheap if you're operating a desktop computer that's plugged into the
wall.

If you're operating off of a battery, which accounts for a huge amount of
consumer computing (especially if phones count), then it starts to matter a
lot. Which is why consumer devices tend to be so heavily optimized for
efficiency.

If you're operating a datacenter, it also quickly becomes a big part of your
costs. Which hasn't necessarily been an immediately pressing concern for
Intel, sure, but that may be why companies like Google have been toying with
ditching Intel for ARM.

------
snarfy
They are called semiconductors after all. Even when fully 'switched' on they
are not very good conductors and cause a voltage drop across the junction.
That equals heat.

------
ohiovr
The reason computers use energy is that transistors have a finite signal rise
time and are bound by Ohm's law. This is unlikely to ever change.

~~~
analog31
Indeed, but smaller transistors (less capacitance, thus less charge transfer)
and lower voltages can reduce the power consumption of digital circuits. Also,
more efficient software could reduce the amount of switching that has to
occur. And I suppose one has to consider the power consumption of some
peripherals such as LCD back-lighting.

Long ago I had an internship where we were developing a battery powered
microprocessor based gadget. We had a prototype with a knob for the clock
speed, and we slowed down the clock until the thing started being noticeably
slow. Then we chose fixed components for the same clock speed. That's how we
optimized battery life.

Note that my thinking is based on relatively simplistic CMOS processors. I
don't know how this applies to higher performance computers in cell phones and
laptops, with dynamic memory and other goodies.
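That knob-turning approach can be captured in a toy model (every constant below is made up for illustration, not taken from any real part): dynamic energy per task scales with V², while static/leakage energy accrues for as long as the task runs, so once voltage can drop at lower clocks there is a sweet spot in between:

```python
def energy_per_task(f_hz, cycles=1e6, c_eff=1e-9, p_static=0.01):
    # Toy model with illustrative constants only (not a real part).
    v = 0.5 + f_hz / 1e8          # assumed: supply voltage must rise with clock
    t = cycles / f_hz             # time to finish the task
    p_dyn = c_eff * v**2 * f_hz   # first-order CMOS switching power
    return (p_dyn + p_static) * t

for f in (1e6, 1e7, 1e8):
    print(f"{f:.0e} Hz: {energy_per_task(f) * 1e3:.2f} mJ per task")
```

With these made-up numbers the middle clock wins: run too slowly and leakage dominates the task's energy; run too fast and the V² term does.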

~~~
ohiovr
Is there such a thing as clockless parts of a CPU that only do I/O on a
signal-by-signal basis?

~~~
analog31
Yes, especially in the embedded world, you can shut down portions of the
system that you aren't using, or have them only wake up when some sort of
event occurs.

------
markhahn
dna transcription is a computation? bah. any computer uses about 10% of its
capital cost in power annually. is that really a lot? sure, if you run it for
10 years, but the real sin there is running such an inefficient computer in
year 9. running it for 3 years is a small deal.
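that rule of thumb checks out with illustrative numbers (a hypothetical $2,000 server drawing 200 W at $0.12/kWh; none of these figures come from the comment above):

```python
# Sanity check of the "~10% of capital cost in power annually" rule of thumb.
capital_usd = 2000.0      # assumed server price
draw_watts = 200.0        # assumed average draw
price_per_kwh = 0.12      # assumed electricity price

kwh_per_year = draw_watts / 1000.0 * 24 * 365
power_cost = kwh_per_year * price_per_kwh   # ~$210
print(f"annual power cost: ${power_cost:.0f} "
      f"({power_cost / capital_usd:.1%} of capital cost)")
```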

sure, we pay a high power cost for reliable, deterministic, digital and
synchronous computation. but we designed it that way, _for_ that reason. we
don't want unreliable or non-deterministic or approximate or whenever
computation.

no question that some practices waste power. but we're doing just fine: power
per computation (and probably more importantly, for communicating with memory
and other systems) is coming down fast. move along, usual moore's law-like
behavior, less to see each year...

------
jvanname
We can try to make more efficient computers by using cryptocurrency mining
algorithms that incentivize the development of energy-efficient reversible
computers, instead of algorithms like SHA-256, which turn out to be quite
buggy when used as mining problems rather than as ordinary hash functions
(some bugs include ASICBoost and approximate mining). I have developed one of
these reversibility-friendly mining algorithms myself. Maybe it is better to
give some actual thought to the cryptocurrency mining algorithm you are using
before the government bans you for foolishly wasting energy.

-Joseph Van Name Ph.D.

------
westurner
> _Also, to foster research on this topic we have built a wiki, combining
> lists of papers, websites, events pages, etc. We highly encourage people to
> visit it, sign up, and start improving it; the more scientists get involved,
> from the more fields, the better!_

Thermodynamics of Computation Wiki
[https://centre.santafe.edu/thermocomp/Santa_Fe_Institute_Col...](https://centre.santafe.edu/thermocomp/Santa_Fe_Institute_Collaboration_Platform:Thermodynamics_of_Computation_Wiki)

HN:
[https://news.ycombinator.com/item?id=18146854](https://news.ycombinator.com/item?id=18146854)

------
csense
Where is the date and author of this article? It seems to be missing!?

~~~
hinkley
> By David Wolpert on October 4, 2018

But I’ll take this opportunity to make my usual PSA: any time you are posting
about the state of an industry _please please date your material_

------
dwighttk
> (We now understand that the second law does not say that the entropy of a
> closed system cannot increase, only that its expected entropy cannot
> increase.)

they mean decrease, right?

------
3rdAccount
This low-powered 144-core computer chip is pretty cool.

[http://www.greenarraychips.com/](http://www.greenarraychips.com/)

~~~
setquk
Not really. It's expensive, a dick to program, has no FP support, and
single-thread performance is rubbish. A cheap ARM is a better deal now, and
they are quoted in µA/MHz so you can trade off your performance on the fly.

~~~
kragen
The GreenArrays chips still use dramatically less energy than ARMs for
comparable operations, but it does still appear that they are more difficult
to program.

------
nofunsir
Fun fact: Heat generated by quantum tunneling's leakage current inside your
processor's transistors is the main limiting factor for cramming more
transistors into a smaller space.

[https://spectrum.ieee.org/semiconductors/devices/the-
tunneli...](https://spectrum.ieee.org/semiconductors/devices/the-tunneling-
transistor)

------
mitchtbaum
It'd be great if we measured software performance not only in, e.g.,
requests/sec for web servers, but also in requests/sec/watt.
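Computing that metric is trivial once you measure average draw during the benchmark run; a sketch with made-up numbers:

```python
def requests_per_sec_per_watt(requests, seconds, avg_watts):
    # Throughput normalized by power: how much work each watt buys.
    return requests / seconds / avg_watts

# Hypothetical run: 120,000 requests served in 60 s at an average draw of 85 W.
print(f"{requests_per_sec_per_watt(120_000, 60, 85):.1f} req/s/W")  # ~23.5
```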

------
zeofig
> the human brain is a computer

Woah... really makes you think.

------
kragen
I've been thinking about computer energy use for a while now, although I'm no
expert in the area. Here's a comparison of nJ per instruction from my notes:

    
    
        |                   |          nJ/insn |
        | MSP430            |              0.9 |
        | PIC24             |               2? |
        | 1990s StrongARM   |                1 |
        | LPC1110           |              0.3 |
        | Pentium           |               10 |
        | STM32L0           |             0.23 |
        | Ickes DSP 2008    |             0.01 |
        | Subliminal 2006   |           0.0026 |
    

MSP430, PIC24:
[http://www.ti.com/general/docs/lit/getliterature.tsp?baseLit...](http://www.ti.com/general/docs/lit/getliterature.tsp?baseLiteratureNumber=slay015&fileType=pdf)

StrongARM:
[http://www.researchgate.net/profile/Kristofer_Pister/publica...](http://www.researchgate.net/profile/Kristofer_Pister/publication/2955370_Smart_Dust_communicating_with_a_cubic-
millimeter_computer/file/e0b4951e43fbf4b41b.pdf)

LPC1110:
[http://www.nxp.com/documents/data_sheet/LPC111X.pdf](http://www.nxp.com/documents/data_sheet/LPC111X.pdf)

Pentium:
[http://www.newscientist.com/blog/technology/2006/08/explodin...](http://www.newscientist.com/blog/technology/2006/08/exploding-
batteries-silver-lining.html)

Ickes DSP 2008: [http://www-
mtl.mit.edu/researchgroups/icsystems/pubs/confere...](http://www-
mtl.mit.edu/researchgroups/icsystems/pubs/conferences/2008/ickes_asscc2008_paper.pdf)

Subliminal 2006:
[http://web.eecs.umich.edu/~taustin/papers/VLSI06-sublim.pdf](http://web.eecs.umich.edu/~taustin/papers/VLSI06-sublim.pdf)
2009:
[https://web.eecs.umich.edu/~taustin/papers/TVLSI09-sublimina...](https://web.eecs.umich.edu/~taustin/papers/TVLSI09-subliminal.pdf)

This is sort of comparing apples to oranges. The Pentium (all of them) uses
wildly varying amounts of power for different instructions, and has 32-bit or
64-bit instructions, with hardware floating point. The STM32L0, LPC1110, and
StrongARM are 32-bit processors with no hardware floating point. The MSP430
and the PIC24 are 16-bit CPUs. The Ickes et al. device
includes a 16-bit FFT accelerator and only runs at 4MHz, but it was only
fabricated as a prototype; you can't buy it. The Zhai et al. Subliminal
device, also only fabricated as a prototype, only runs at 833kHz, and it
doesn't even include an integer multiply instruction, but its somewhat limited
ALU is 32 bits.

However, _all_ of these numbers are far from the Landauer bound (kT ln 2).
Suppose that we don't make any concessions to reversibility in our CPU design,
like Metronome and its successors. A 32-bit instruction, then, erases the
32-bit register where its result is stored, costing 32 kT ln 2, or in some
situations, an average of 16 kT ln 2. Supposing T = 300 K, 32 kT ln(2) ≈ 0.092
attojoules. That's _over seven orders of magnitude_ better than the prototype
Subliminal processor mentioned above, and nine orders of magnitude better than
the Cortex-M0-based commercial processors mentioned above.
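The arithmetic behind those comparisons, as a quick script (the Boltzmann constant is the exact SI value; the per-instruction figures are the ones from the table above):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_erase_energy(bits, temp_k=300.0):
    # Minimum energy to erase `bits` bits of information: bits * k * T * ln 2
    return bits * K_B * temp_k * math.log(2)

e32 = landauer_erase_energy(32)             # erasing a 32-bit register at 300 K
print(f"32-bit erase bound: {e32:.2e} J")   # ~9.2e-20 J = 0.092 attojoules

# Gap to real hardware, using figures from the table above:
subliminal = 0.0026e-9   # J/insn, Subliminal 2006 prototype
stm32l0    = 0.23e-9     # J/insn, commercial Cortex-M0-class part
print(f"Subliminal: {subliminal / e32:.1e}x the bound")   # ~3e7 (7+ orders)
print(f"STM32L0:    {stm32l0 / e32:.1e}x the bound")      # ~2e9 (9 orders)
```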

Wolpert has published
[https://arxiv.org/pdf/1806.04103.pdf](https://arxiv.org/pdf/1806.04103.pdf)
(mentioned in the article, but not linked AFAICT) which gives better
expressions for the cost of computation than the Landauer bound. I haven't
finished reading it, but it looks really interesting.

------
Dylan16807
They use a bunch of energy because we can trade power for speed, and entropy
is completely irrelevant at 0% of the budget.

------
baybal2
At least in part, that's due to servers in DCs not being power-managed
whatsoever, 9 times out of 10.

------
utopcell
The energy consumed by the US is less than 0.1% of the energy the planet gets
from the Sun, so I'm not sure the 5% comparison holds water. Besides,
computers return their energy cost many times over.

I do agree with the article's conclusion though: the human brain runs at 20W,
on par with a low-power laptop, but arguably the former does much more with
that energy.
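That 0.1% figure is easy to sanity-check (the solar constant and Earth's radius are standard values; the US consumption figure, ~100 quadrillion BTU/year, is my own ballpark assumption):

```python
import math

solar_constant = 1361.0    # W/m^2 at top of atmosphere
earth_radius = 6.371e6     # m
# Sunlight intercepted by Earth's cross-sectional disk:
intercepted_w = solar_constant * math.pi * earth_radius**2   # ~1.7e17 W

# US primary energy use, ~100 quadrillion BTU/year (1 BTU = 1055.06 J):
us_w = 100e15 * 1055.06 / (365.25 * 24 * 3600)               # ~3.3e12 W

print(f"US consumption = {us_w / intercepted_w:.4%} of intercepted sunlight")
```

That comes out around 0.002%, comfortably under the 0.1% cited.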

------
m3kw9
Just look at the amount of current transistors need to switch their gates.
Multiply that by billions.

------
nerpderp83
Because of leakage current.

------
arthurcolle
because of all the network IO thats now baked into operating systems?

------
pcvarmint
Irreversibility.

Did I miss something?

------
agumonkey
Wait until Landauer's principle gets proved :)

------
just_myles
Bitcoin and high end video cards.

------
lousken
software should be more efficient, not computers

------
EGreg
Koomey’s law says new computers do more per joule all the time :)

------
tekno45
Won't this heat up the oceans, causing more of "the Blob" in the Pacific?

------
JazCE
For sending data to china?

[https://news.ycombinator.com/item?id=18138328](https://news.ycombinator.com/item?id=18138328)

------
RightMillennial
Because bitcoin and all of the other cryptocurrencies don't mine themselves.

