
What to do with those idle cores? - wheels
http://cacm.acm.org/blogs/blog-cacm/23833-what-to-do-with-those-idle-cores/fulltext
======
comatose_kid
Instead of doing more work with idle cores, why not spread the work out
amongst them? This might provide a net savings in power consumption...

Given that technologies like SpeedStep allow the voltage and frequency of a
CPU to be modified dynamically, and that dynamic power consumption is governed
by P = C * V^2 * f:

Say we have a 4-core CPU, with each core equivalent to a 1.6 GHz Pentium M.
According to Wikipedia, the clock frequency on such a device can be stepped in
200 MHz increments over the range from 1.6 GHz down to 0.6 GHz. At the same
time, the voltage requirement decreases from 1.484 V to 0.956 V.

So, say you have a workload that keeps one 1.6 GHz core pegged near 100%.
Instead, why not spread this work across 3 cores, while lowering the speed to
0.6 GHz and the voltage to 0.956 V?

In aggregate, that's the equivalent of a 1.8 GHz machine (3 x 0.6 GHz).

Would this save energy? The original power factor is:

1 core * (1.484^2) * 1.6 = 3.52

The new power factor is:

3 cores * (0.956^2) * 0.6 = 1.645

So, about half the power for this example. Of course, nothing is perfect:
spreading your workload across 3 cores will lose some efficiency and add
implementation complexity.
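The arithmetic above is easy to sanity-check in a few lines of Python. This is just a sketch of the simplified dynamic-power model P = C * V^2 * f, with the constant C factored out:

```python
# Sanity check of the power comparison above, using the simplified
# dynamic-power model P = C * V^2 * f (capacitance C factored out).

def power_factor(cores, volts, ghz):
    """Relative dynamic power: cores * V^2 * f."""
    return cores * volts ** 2 * ghz

baseline = power_factor(1, 1.484, 1.6)  # one core at full speed
spread = power_factor(3, 0.956, 0.6)    # three throttled cores

print(f"baseline: {baseline:.3f}")           # 3.524
print(f"spread:   {spread:.3f}")             # 1.645
print(f"ratio:    {spread / baseline:.2f}")  # 0.47, about half
```

The numbers match the hand calculation: roughly a 2x power saving on paper, before any real-world losses.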

~~~
jmtulloss
Of course spreading the work is a good idea, but most apps aren't built for
multiple processors now, and unless they need to be, they won't be in the
future. If an app is fast enough on one core, it won't be optimized for 8.

~~~
wheels
One thing that's interesting there is that toolkits are starting to contain
elements to make parallelization easier.

Qt specifically now has support, via QtConcurrent, for map and map-reduce
patterns that are automatically farmed out to a thread pool the same size as
the number of cores; it handles most of the hairy bits of threading for you.

I've had to patch it heavily for my own use, but it's still a promising
advance. I suspect we'll see more of the same in toolkits in the future.
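To give a feel for the pattern, here's a rough analogue in Python's standard library. This is not Qt, just the same core-count-sized pool idea; `expensive` is a made-up stand-in workload:

```python
# A rough analogue of QtConcurrent's map pattern using Python's stdlib:
# work items are farmed out to a pool sized to the number of cores.
import os
from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    # Made-up stand-in for a CPU-heavy task.
    return sum(i * i for i in range(n))

def parallel_map(fn, items):
    # Pool size follows os.cpu_count(), mirroring the "thread pool the
    # same size as the number of cores" behavior described above.
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        return list(pool.map(fn, items))

print(parallel_map(expensive, [10, 20, 30]))  # [285, 2470, 8555]
```

One caveat: in CPython the GIL limits CPU-bound speedup across threads, so a real Python version would use a process pool; QtConcurrent, being C++, gets genuine parallelism from its thread pool.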

------
alexitosrv
I would say precompute anything the user might run, to make things faster.
However, accurately anticipating user needs is really hard.

I only thought of these tasks:

In a word processor: we could load grammar and NLP engines, and download and
analyze trends in web text to improve suggestions in near real time, also
trying to understand what the user is writing and helping to improve not only
spelling and grammar but style too, for example.

In web search: personalize the whole search and browsing experience by
downloading and processing partial results, with intelligent assistants
running in the background (this is my favorite).

------
tophat02
I'd say you build a business like Amazon EC2 or Windows Azure, but implement
"virtual virtual machines" using the idle resources of millions of computers.
The catch: you probably couldn't pay each user enough to offset the sudden
rise in their electricity bills.

------
signa11
will it be possible to have processor cores reconfigured dynamically for
efficient execution of the task at hand?

