

Intel predicts ubiquitous, almost-zero-energy computing by 2020 - mrsebastian
http://www.extremetech.com/computing/136043-intel-predicts-ubiquitous-almost-zero-energy-computing-by-2020

======
hemancuso
This immediately reminded me of when Intel predicted 10GHz processors by 2011
back in 2000

[http://www.geek.com/articles/chips/intel-predicts-10ghz-
chip...](http://www.geek.com/articles/chips/intel-predicts-10ghz-chips-
by-2011-20000726/)

Ever-lower-energy computing is obviously happening, but there seem to be
lower bounds on how little power a useful device needs. Wireless networking
will always require a reasonable amount of power to achieve any reasonable
distance and reliability.

~~~
lukeschlather
Do we not have 10GHz processors? My impression is that they exist, or you can
at least overclock to that level; it's just neither reliable nor economical,
and real workloads that require that much horsepower can by and large be
parallelized, so the difference between quad-core 3.5GHz and 10GHz is mostly
academic.

~~~
ajross
We have 10GHz digital logic for sure (Thunderbolt drives its differential
lines that fast, I believe). You can do that for a small region of a chip with
a handful of transistors. But you can't run a whole 50+ mm^2 die at that speed
for the simple reason that it can't be cooled by non-exotic methods. You just
can't get the heat out fast enough, so you can't ship the product.

So for now, with ambient temperature cooling, the best modern chips top out at
about 5GHz. That will likely improve slowly with each process generation due
to efficiency improvements, but it's not going to change much.
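
A rough back-of-envelope sketch of why, with every number below an
illustrative assumption rather than a measured figure: dynamic power scales
roughly as C * V^2 * f, and raising the clock usually means raising the
voltage too, so power grows much faster than linearly with frequency.

    # Rough sketch: why clocking a whole die at 10 GHz is a cooling problem.
    # All numbers are illustrative assumptions.
    baseline_freq_ghz = 3.5    # assumed comfortable clock for a modern die
    baseline_power_w = 95.0    # assumed power draw at that clock
    target_freq_ghz = 10.0

    # Voltage held constant: dynamic power scales linearly with frequency.
    linear_power = baseline_power_w * (target_freq_ghz / baseline_freq_ghz)

    # Voltage scaling roughly with frequency: power scales ~ f^3.
    cubic_power = baseline_power_w * (target_freq_ghz / baseline_freq_ghz) ** 3

    print(f"~{linear_power:.0f} W if voltage stays flat")
    print(f"~{cubic_power:.0f} W if voltage rises with frequency")

Even the optimistic constant-voltage case lands around ~270 W, and the more
realistic cubic case is in the kilowatt range; neither is plausible to
extract from a ~50 mm^2 die with non-exotic cooling.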

~~~
Daniel_Newby
If you can run a quad-core chip at 3 GHz, then it is almost certainly
feasible to run one core at 10 GHz.

I suspect the real limit is in the size of the L1 and predecode caches. I
expect they are limited by propagation delay, not transistor speed. The only
way to get more cache is to use more cores.

~~~
ajross
Heat transfer doesn't scale quite like that. Once the circuit gets beyond
"tiny subcircuit" size you're limited by the essentially 1D transfer "up"
through the packaging and can't rely on the cooler silicon in the surrounding
regions to spread the heat laterally. One core is still a really big circuit.

And yes, caches are big regions and hard to make faster synchronously. But you
can treat this with pipelining -- that's one reason even the 32KB L1 data cache
in SNB/IVB has a 4-cycle (!) latency. Note that even the gargantuan L3
cache on these chips (which is literally 1/3 of the die area) is still running
at full speed, just with a ~30-cycle latency.
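
To make the pipelining point concrete, a toy model (not a measurement -- the
one-access-per-cycle issue rate is an assumption): on a pipelined cache, N
independent loads finish in roughly latency + N - 1 cycles rather than
latency * N.

    # Toy model of a pipelined cache: N independent loads complete in about
    # (latency + N - 1) cycles, not latency * N. The one-access-per-cycle
    # issue rate is an assumption for illustration.

    def pipelined_cycles(n_loads, latency):
        """Cycles for n independent loads on a pipelined cache."""
        return latency + n_loads - 1

    def serial_cycles(n_loads, latency):
        """Cycles if each load had to wait for the previous one."""
        return latency * n_loads

    for name, latency in (("L1", 4), ("L3", 30)):
        n = 1000
        print(f"{name}: {pipelined_cycles(n, latency)} cycles pipelined "
              f"vs {serial_cycles(n, latency)} cycles unpipelined for {n} loads")

So the 4-cycle (or ~30-cycle) latency only bites when accesses depend on each
other; for independent accesses, throughput stays near one per cycle.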

~~~
wtvanhest
Lower energy consumption should also help the heat problem: less energy
wasted as heat.

------
dmlorenzetti
_Accurately measuring geospatial location via GPS, making a phone call, or
playing a game is meaningful._

Starting with a definition like that, it's not hard to see how the author
concludes that near-zero-energy computing isn't possible. But a more modest
definition -- for example, compute-enabled "smart" versions of already-existing
products -- may make that vision possible. Thinking of sensors as a form of
meaningful computation expands the range even more.

For example, I've seen switches that harvest enough energy from the act of
pressing them to communicate their change in status to the controlled device.
Now consider coupling that energy to some logic. For example, maybe a single
light switch dynamically determines which of several lights you want to switch
on or off in the space.

The real barrier to "ubiquitous meaningful low-energy computation" is that the
marginal extra energy for adding computation to an existing device must be
small compared to the energy that device already draws. The classic example of
this is automobiles, which have been acquiring more and more sensors and
internal control logic over the years. As the energy consumption of logic
components drops, why shouldn't those possibilities jump to other devices?
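
A back-of-envelope budget for the self-powered switch idea, with every figure
a rough assumption for illustration rather than a datasheet value:

    # Hypothetical energy budget for a self-powered "smart" light switch.
    # All figures are assumptions, chosen only to show the shape of the
    # argument.
    harvested_per_press_uj = 300.0   # assumed energy harvested from one press
    mcu_wakeup_uj = 5.0              # assumed cost to wake a tiny MCU and run logic
    radio_packet_uj = 50.0           # assumed cost of one short radio packet

    spend = mcu_wakeup_uj + radio_packet_uj
    headroom = harvested_per_press_uj - spend
    print(f"{spend:.0f} uJ spent of {harvested_per_press_uj:.0f} uJ harvested; "
          f"{headroom:.0f} uJ left for extra 'smart' logic")

If numbers anywhere in that neighborhood hold, the marginal energy for the
added logic really is small compared to what the button press already
supplies.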

The following, related, analysis was posted to HN a while ago:
[http://www.antipope.org/charlie/blog-static/2012/08/how-
low-...](http://www.antipope.org/charlie/blog-static/2012/08/how-low-power-
can-you-go.html)

------
s_baby
>Looks great, but ignores the fact that transistors don’t scale like they used
to. Remember, the point of near-threshold voltage and the research into
replacing silicon is intended to move the bar forward bit by bit, not to re-
enable the classic Dennard scaling of the 1980s and 1990s. That era is gone,
and nothing short of a miracle material that fulfills all the roles of silicon
will ever bring it back.

Incremental changes in architecture do not have to equate to incremental
changes in capability. Replacing silicon with a different substrate can
introduce new time complexities for old problems. For example, mapping neural
models onto memristors could scale better than mapping the same models onto
traditional silicon. Mapping quantum physics models onto qubits will scale
better than mapping the same models onto silicon. Mapping protein folding onto
protein-based computers could... and so on.

------
ck2
I've wondered if mobile devices will reach the point of such low power
consumption that they can be powered by all the radio wave energy already
enveloping us.

~~~
kmm
I'm afraid that won't be enough. There's a lower limit on how much you can
compute with a given amount of energy. If I recall correctly, switching a bit
will cost at least ln(2)kT of energy, or about 3x10^-21 joules at room
temperature. According to Wikipedia[1], FM signals are minimally 10^-18 watt.
If we have about a hundred antennas and assume a somewhat larger signal
strength, we could thus viably use 10^-15 watt. Sadly, that will only allow us
to switch 400,000 bits per second[2], not nearly enough for any meaningful
computation, let alone communication. Notice that I'm not talking about
storing a few hundred thousand bits, I'm talking about switching them from 0
to 1 or vice versa. Adding two numbers will probably switch dozens of bits.
And that is with equipment operating at the limit of what is possible by the
laws of physics, so we'll likely end up a few factors of a thousand lower. A
few hundred bits per second is probably enough for some RFID-like system, but
not for anything useful.

1:
[http://en.wikipedia.org/wiki/Orders_of_magnitude_(power)#att...](http://en.wikipedia.org/wiki/Orders_of_magnitude_\(power\)#attowatt_.2810.E2.88.9218_watt.29)
2:
[http://www.wolframalpha.com/input/?i=1+bit+%2F+%28ln%282%29+...](http://www.wolframalpha.com/input/?i=1+bit+%2F+%28ln%282%29+*+boltzmann+constant+*+300K+%2F+%2810%5E-15+watt%29%29)
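
The arithmetic is easy to reproduce (the constants and the 1 fW figure are as
above):

    # Reproducing the Landauer-limit arithmetic from the comment above.
    import math

    k_boltzmann = 1.380649e-23   # J/K
    temperature = 300.0          # K, room temperature
    harvested_power = 1e-15      # W, the ~hundred-antenna estimate above

    energy_per_bit = k_boltzmann * temperature * math.log(2)   # ~2.9e-21 J
    flips_per_second = harvested_power / energy_per_bit        # ~3.5e5

    print(f"minimum energy per bit switch: {energy_per_bit:.2e} J")
    print(f"bit switches per second at 1 fW: {flips_per_second:.2e}")

That's the ~400,000 switches per second figure above, and it already assumes
hardware operating right at the thermodynamic limit.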

------
woah
What happened to the magical PixelQi (<http://pixelqi.com/>) screens that we
have been promised for the last 3 years? Supposedly completely reflective,
full-color, fast-refreshing LCD screens with very low power consumption. Seems
that all that's available is a screen that you have to hack into a very
limited number of netbook models yourself. Low-power screens would make a far
greater impact than any advances in low-power processing. Can't understand
why Apple or somebody else isn't all over these.

Even if the color reproduction is terrible, imagine the benefits of being able
to write code in the park. We'd be seeing a lot of very tan developers.

~~~
webreac
Screens are too power hungry. The evolution is toward glasses that project
light onto your eye. The first steps were to remove keys and wires; the next
is to remove the screen and speakers.

~~~
mahyarm
I think it's more that the backlight is power hungry, not the display itself.
Projecting light directly into the eyes seems to carry a risk of causing
vision problems.

------
TravisDirks
I think the author may have misunderstood the claim. Intel was probably
referring literally to low-power computing (what happens inside Intel's
chips), not to low-power anything-you-can-do-with-a-computer (display,
communicate with others, etc.). In other words, the processor.

Nearly all the power and heat problems in processors have to do with impedance
mismatches between materials in the circuit. It's been about 5 years since I
went to a conference called Beyond Moore's Law, but I remember a brilliant
talk on a 5-6 order of magnitude decrease in power that is possible through
impedance matching. (I couldn't find a link online, sorry!)

I suspect (rather arrogantly, since I have not seen Intel's article directly)
that this is what Intel was talking about.
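
For anyone unfamiliar with the term, a minimal illustration of what impedance
matching is about -- just the textbook reflection-coefficient formula for
real-valued impedances, not anything from that talk: at a source/load
boundary, a fraction |gamma|^2 = ((Z_L - Z_S)/(Z_L + Z_S))^2 of the incident
power is reflected rather than delivered.

    # Textbook reflection-coefficient illustration (real-valued impedances
    # only, for simplicity): mismatched impedances reflect power instead of
    # delivering it.

    def reflected_fraction(z_source, z_load):
        """Fraction of incident power reflected at the source/load boundary."""
        gamma = (z_load - z_source) / (z_load + z_source)
        return gamma ** 2

    for z_load in (50.0, 75.0, 200.0, 1000.0):
        r = reflected_fraction(50.0, z_load)
        print(f"Z_load = {z_load:6.0f} ohm: {r * 100:4.1f}% reflected, "
              f"{(1 - r) * 100:5.1f}% delivered")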

------
ripperdoc
Reminds me of this article by SF writer Charlie Stross:
[http://www.antipope.org/charlie/blog-static/2012/08/how-
low-...](http://www.antipope.org/charlie/blog-static/2012/08/how-low-power-
can-you-go.html)

------
gosub
I remember seeing videos of some lectures by Hal Abelson on a project he was
working on (in Scheme, of course) about a network of processing units sharing
information and distributing computations.

------
ktizo
I remember reading a lecture by Feynman where he discusses reversible
computing and suggests that you can get processing down to insanely low power
usage using those methods.

I couldn't find a link to the lecture in question, but here is some recent
research to give an overview - [http://ercim-
news.ercim.eu/en79/special/micropower-towards-l...](http://ercim-
news.ercim.eu/en79/special/micropower-towards-low-power-microprocessors-with-
reversible-computing)
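
The core idea is that a reversible gate never erases a bit, so in principle
it never has to pay the Landauer k*T*ln(2) cost per erased bit. A minimal
sketch using the classic Toffoli (controlled-controlled-NOT) gate:

    # Toffoli gate: flips the target bit only when both controls are 1.
    # It is a bijection on 3-bit states and its own inverse, so no
    # information is ever destroyed.
    from itertools import product

    def toffoli(a, b, c):
        """Return (a, b, c XOR (a AND b))."""
        return a, b, c ^ (a & b)

    # Bijectivity: all 8 distinct inputs map to 8 distinct outputs.
    assert len({toffoli(a, b, c) for a, b, c in product((0, 1), repeat=3)}) == 8

    # Self-inverse: applying the gate twice restores the input.
    for bits in product((0, 1), repeat=3):
        assert toffoli(*toffoli(*bits)) == bits

    print("Toffoli is reversible: bijective and self-inverse.")

Toffoli gates are universal for reversible classical logic, which is why, in
principle, computation built from them can dissipate arbitrarily little energy
per operation (at the cost of carrying the extra "garbage" bits along).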

