

P-complete and the limits of parallelization - neilc
http://blogs.msdn.com/devdev/archive/2007/09/07/p-complete-and-the-limits-of-parallelization.aspx

======
ivankirigin
What is needed is a simple framework for clean inter-process and inter-core
communication and data sharing. It must be non-blocking, fast, and high level,
e.g. use a /networkName/machineName/coreName/processName path to make a
connection.

People talk about the lag in software being designed for multi-core. They're
right, but that's an ambiguous statement.

I'd love to have this framework in a high-level language where the code is the
configuration, e.g. python modules can be edited live to make/break/configure
connections and streams. To optimize, move around the processes to different
cores and eventually drill down to implement what was in python in a low-level
language.
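A minimal sketch of what that path-addressed, non-blocking layer could look like in Python. Every name here (the paths, the registry class, its methods) is invented for illustration; a real framework would of course route across machines rather than within one process.

```python
# Hypothetical sketch of a path-addressed, non-blocking channel registry,
# roughly in the spirit of /networkName/machineName/coreName/processName.
# All names are made up for illustration; this only routes in-process.
import queue

class ChannelRegistry:
    """Maps slash-delimited endpoint paths to message queues."""

    def __init__(self):
        self._channels = {}

    def connect(self, path):
        # Create (or reuse) the channel for an endpoint path such as
        # "/lan0/box1/core0/worker".
        return self._channels.setdefault(path, queue.Queue())

    def send(self, path, message):
        # Non-blocking send: report failure instead of stalling the caller.
        try:
            self.connect(path).put_nowait(message)
            return True
        except queue.Full:
            return False

    def receive(self, path):
        # Non-blocking receive: None if nothing is waiting.
        try:
            return self.connect(path).get_nowait()
        except queue.Empty:
            return None

registry = ChannelRegistry()
registry.send("/lan0/box1/core0/worker", {"cmd": "start"})
print(registry.receive("/lan0/box1/core0/worker"))
```

Because the code is the configuration, making or breaking a connection is just editing the path strings in a live module, which is the appeal of prototyping this in a high-level language first.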

I'm pretty sure lots of companies have seen this coming for some time, and
have worked on such a tool. Either way, it would make a great startup.

------
mxh
Multi-core's most immediate application won't be to algorithmic
parallelization, which is a ball-breaker of a problem even w/o awkward
theoretical results. It will be to multitasking, VMs, etc. - scenarios where
the utility of extra cores is pretty easy to see.

Those scenarios make the most immediate sense inside the data center. I think
part of the push towards SaaS and web-based applications will come from the
fact that Moore's law will begin to apply disproportionately to hardware used
to serve those solutions. Soon, the same CPU might be able to run either one
desktop or 50 virtual servers, making servers cheap relative to desktops.

------
rms
How many cores do we need before we start really showing the limits of
parallelization? Is anyone educated enough to make a ballpark guess for when
the diminishing returns will stop rapid progress in parallelization?

We can't know because we can't prove P=NC or P!=NC.
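Theory aside, Amdahl's law gives one ballpark way to see the diminishing returns: the serial fraction of a program caps its speedup no matter how many cores you add. A quick sketch (the 5% serial fraction is an arbitrary assumption, not a measured number):

```python
# Amdahl's law: speedup = 1 / (s + (1 - s) / n), where s is the serial
# fraction of the work and n is the core count. The 5% serial fraction
# below is an arbitrary illustration.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.05, cores), 1))
# With a 5% serial fraction, speedup is capped at 1/0.05 = 20x,
# no matter how many cores are added.
```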

~~~
ivankirigin
It's highly task dependent, making the "theory" pretty useless, no?

Hell, we could even say it's multi-task dependent, as any real, deployed system
won't be trying to complete a single task.

~~~
rms
So theory aside, multi-core processors should be good for at least 20 years?
And that's if we don't come up with anything better.

~~~
ivankirigin
What do you mean by "should be good"? Things can still improve if the hardware
configuration remains the same.

And I have no idea how long the trend for increased horsepower will continue.

My hope with my comment was to get away from prognostication and from reducing
the problem to a simple number. But perhaps one takeaway is that we should use
a suite of real programs to test hardware configurations. Benchmark routines
are pretty useless when hardware is optimized for the routine, which has
obviously been happening for years.

A seti-at-home-style system that reported top output and your system
configuration would be useful. Deploy it to a few hundred computers and see
how things perform in the aggregate as hardware quality increases.

