Why do you assume ARM-based processors will necessarily have a higher perf/W than x86? This has been a common claim (usually because of the perceived size of the x86 instruction decoder), but Medfield has proven that to be false:
Benchmarks and lies, yada yada. Atom wins on some things (in particular it tends to kick the A9's butt on Javascript benchmarks, which rely on single threaded dispatch and high clock speeds) and loses on others (it's a single core with hyperthreading, where most ARM SoCs are dual core).
Actually depending on the benchmark, the low-clocked Ivy Bridge CPUs tend to do quite well in "performance per watt" vs. ARM SoCs too. They lose big in idle power, but under load those enormous L3 caches and the uOp cache can give them 2-3x the performance per clock of the in-order A9 (and they run about 2x as fast, and draw about 4-10x as much power at peak, so it actually comes out very (!) roughly even).
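The "very roughly even" claim above can be sanity-checked with back-of-envelope arithmetic. This sketch just multiplies out the ranges quoted in the comment (2-3x per-clock performance, ~2x clock, 4-10x peak power); the numbers are the commenter's rough estimates, not measured data.

```python
# Rough perf/W comparison: low-clocked Ivy Bridge vs. in-order Cortex-A9,
# using the ranges quoted in the comment above (estimates, not benchmarks).

perf_per_clock = (2.0, 3.0)   # IVB does 2-3x the work per clock
clock_ratio    = (2.0, 2.0)   # and runs about 2x as fast
power_ratio    = (4.0, 10.0)  # but draws 4-10x as much power at peak

# Total performance advantage = per-clock advantage * clock advantage
perf_lo = perf_per_clock[0] * clock_ratio[0]   # 4x
perf_hi = perf_per_clock[1] * clock_ratio[1]   # 6x

# Perf-per-watt ratio, best and worst case for Ivy Bridge:
ppw_best  = perf_hi / power_ratio[0]   # 6 / 4  = 1.5x in IVB's favor
ppw_worst = perf_lo / power_ratio[1]   # 4 / 10 = 0.4x against IVB

print(f"perf/W ratio: {ppw_worst:.1f}x to {ppw_best:.1f}x")
```

The resulting range straddles 1.0x, which is why the comparison comes out "very roughly even" despite the large gap in absolute power draw.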
ARM has a long way to go before they are legitimately competitive in the server space. But Intel still isn't anything more than "broadly competitive with 2-year-old devices" in the mobile world. Over time I'd expect the architectures to converge from both directions, but I don't feel lucky enough to guess at which one will "win".
Oh, I'm not claiming that x86 is going to win in mobile or anything like that. My only point is that, based on devices shipping today, the choice of ISA does not cause huge power efficiency disparities, and there's no reason to think that will change going forward.
http://www.anandtech.com/show/5770/lava-xolo-x900-review-the...