

Static energy consumption analysis of LLVM IR programs - drjohnson
http://arxiv.org/abs/1405.4565

======
sp332
How can they do this for IR? Wouldn't the actual hardware make a difference in
power usage?

~~~
mvzink
The two main inputs to their model are the IR and the ISA.
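
The general shape of such a model (sketched here with hypothetical opcode names and made-up per-instruction energy costs, not the authors' actual model): assign each IR instruction a per-ISA energy cost, infer a static upper bound on how often each basic block executes, and sum.

```python
# Hypothetical per-instruction energy costs (picojoules) for some target ISA.
# Real models would be derived from measurement or datasheets.
ISA_COSTS_PJ = {
    "add": 3.2,
    "mul": 9.1,
    "load": 14.5,
    "store": 12.0,
    "br": 2.4,
}

def block_energy(instructions, trip_count_bound):
    """Upper-bound energy for one basic block executed at most
    trip_count_bound times."""
    per_iteration = sum(ISA_COSTS_PJ[op] for op in instructions)
    return per_iteration * trip_count_bound

def function_energy(blocks):
    """blocks maps block name -> (instruction opcodes, static bound on
    how many times the block can execute)."""
    return sum(block_energy(instrs, bound) for instrs, bound in blocks.values())

# A toy function: a loop body statically bounded to 100 iterations,
# plus an exit block that runs once.
loop = {
    "loop.body": (["load", "mul", "add", "store", "br"], 100),
    "exit":      (["br"], 1),
}
print(function_energy(loop))  # upper bound in picojoules: 4122.4
```

The hard part in practice is the trip-count bounds, not the summation; that is where the static analysis machinery goes.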

------
rainforest
See also: Wattch[1], a framework based on a similar idea. Wattch relies on
simulation of the target rather than static analysis, but the application of
instruction-level cost models seems the same.

[1] http://www.eecs.harvard.edu/~dbrooks/isca2000.pdf

------
phkahler
Can they answer the question "Which architecture is most efficient?"

~~~
cfallin
Nope, the ISA power model is an input to their work.

And on the underlying question, there's been recent academic work on this,
e.g. in [1]. Common wisdom these days (among many people at least) is that the
architecture (ISA) doesn't matter too much w.r.t. power, as compared to the
microarchitecture. The reason is that modern implementations translate
whatever quirks exist in the ISA into a fairly uniform set of underlying
micro-ops; out-of-order superscalar implementations of x86 and ARM will look
pretty similar. (Go read Bob Colwell's book on P6/Pentium Pro, the first out-
of-order x86, for more insight on this.)

The conflation of ARM/low-power and x86/high-power is a historical thing: x86
started on desktops and moved downward, while ARM started in embedded systems
and moved upward. It's becoming less true as each heads toward the same target
(mobile). Remaining differences in efficiency are mostly functions of
implementation choices and engineering quality.

Big disclaimer: I worked at Intel after doing grad school in computer
architecture, so I may be slightly biased. :-)

[1] E. Blem et al. "Power Struggles: Revisiting the RISC vs. CISC Debate on
Contemporary ARM and x86 Architectures." In HPCA-19, 2013.

~~~
hga
Nit: ARM was originally developed for desktops as well, though very price-
constrained ones. The story as I remember reading it is that they couldn't
afford the cost of a ceramic package, so they were very careful about power
dissipation, and when the first silicon came back, they found it consumed 1/10
of their design goal.

The low cost and low power made it a natural for lots of embedded designs
following that.

------
VikingCoder
> Using these techniques we can automatically infer an approximate upper bound
> of the energy consumed when running a function under different platforms,
> using different compilers - without the need of actually running it.

Wouldn't a necessary first step be to solve the Halting Problem?

Sorry, I haven't read the actual paper.

~~~
adrianN
Determining the runtime bounds of a function is also undecidable.[1] That
doesn't mean that we can't do it in practice for the things we're interested
in.

[1] https://cstheory.stackexchange.com/questions/5004/are-runtime-bounds-in-p-decidable-answer-no
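
To make the "doable in practice" point concrete, here's a toy contrast (my own illustration, not from the paper): one loop whose trip count a static analyzer can bound syntactically, and a Collatz-style loop for which no general runtime bound is known.

```python
# Trip count is syntactically evident: the loop runs exactly n times,
# so a static analyzer can bound runtime (and energy) in terms of n.
def sum_to(n):
    total = 0
    for i in range(n):
        total += i
    return total

# Collatz-style loop: whether this terminates for all n is an open
# problem, so a static tool must give up or grossly over-approximate.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(sum_to(10))        # 45
print(collatz_steps(6))  # 8
```

Most real code looks more like the first loop than the second, which is why static bound analysis works well enough in practice.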

