

Who Wants Parallel Computers? - gruseom
http://rjlipton.wordpress.com/2010/11/21/who-wants-parallel-computers/

======
RiderOfGiraffes
I've done research in writing tools to make parallel machines easier to use
for non-specialists. The consensus is still, largely, that you can't do it.
There are special cases, and there is library code, but most people haven't
a clue how to write code that will run reliably on multi-processor systems.

I think almost everyone involved in the industry knows that we want faster
uni-processors, and that we don't want multi-core machines. We're pretty sure
that some problems with fast serial algorithms don't have efficient parallel
algorithms (this is akin to P-vs-NP-type questions, and is beyond my current
knowledge), and those problems are stalled at uni-processor speeds.

Research continues, progress is made, and most people don't care because most
people just write emails, use Facebook, and browse the web. Heavy computing
with parallel or multi-processor machines is hard in general, and that's not
changing fast.

I don't expect it to. People will simply have to learn how to devise parallel
algorithms, and write good, parallel code.

~~~
JoachimSchipper
I'm not so sure that multicore programming will be the future.

In the last couple of years, we have seen the rise of the netbook, the
smartphone, and languages like Python and Ruby. Each of these is, apparently,
"fast enough" to be highly successful.

Applications are also changing: casual games, Farmville and even World of
Warcraft have very modest system requirements but make a ton of money. Web
applications are _awful_ from a performance perspective: they are crippled by
network latency and need a single server to do the work for many people.
Still, they are "the modern way" to write an application. Again, the
performance is "good enough".

Finally, even when performance _is_ needed, there are models for parallel
programming other than multicore threading. GPUs are very fast data-parallel
"single core"-like devices. At the other end of the spectrum, real-world "web
scale" systems scale horizontally and are thereby forced to adopt a
multi-process model.

Yes, our OSes should be written for multicore, the newest Doom/Quake should be
written for multicore, and our numerical models should be at least aware of
multicore; but despite the fact that the C programmer in me will likely be
happier in your multicore world than in the world I sketch above, I think most
programmers will live in the latter.

~~~
scott_s
_GPUs are very fast data-parallel "single core"-like devices._

I don't think that's accurate - being "single core"-like. Besides literally
having multiple execution cores, GPUs are radically data-parallel in ways that
force even those already familiar with data-parallel code (using tools such as
OpenMP) to adapt their thinking. Code that extracts high performance from a
GPU must be aware of the memory hierarchy and the degree of parallelism - you
can't pretend you're doing sequential programming.

~~~
JoachimSchipper
It's definitely not your old CPU - but it's a _very_ different model from the
threaded programming common with multi-core programs.

But I agree, "single core" isn't really true.

------
scott_s
_Essentially he was explaining the physics behind the collapse of Moore’s Law
and the rise of many-core systems._

Moore's Law still holds; it is about the number of transistors on a chip, not
processor frequency. The problem is that, for a long time, the way we achieved
better performance was to increase sequential performance, which meant higher
clock frequencies, larger caches, and longer, more elaborate instruction
pipelines to extract instruction-level parallelism. _That's_ what is breaking
down, not Moore's Law.

That we're now exploring multicore design is _because_ of Moore's Law: our old
approaches for improving performance no longer work, but we're still getting
more transistors to work with.

------
swolchok
I'm a little confused about what point the article is trying to make. I don't
think that the many-core folks have ever said that they're building many-core
because people want it; I've consistently heard that it's because of Moore's
Law collapse, as admitted in the article.

If it were economical to produce faster uniprocessors instead, you can bet
that chip companies would do it, because they'd make more money that way.

~~~
fauigerzigerk
Reminding us that parallelism is not our original goal is just the prelude to
casting doubt on the inevitability of the breakdown of Moore's law for unicore
processing. That's how I interpret the article.

------
jedbrown
Doubling the clock rate without also doubling memory bandwidth and halving
latency is likely to bring less than a 71% speedup on that chess program.

------
darwinGod
I think a major problem is that there are many parallel-programming models,
which either do not make big news outside the academic world or are deemed
unsuitable for industry (CUDA folks - not talking about you!).

Like Matt Welsh (the Harvard prof who left for Google) mentioned on his blog
recently: how many people risk running OpenMP in production and watching some
nodes fail?

Even the "Big" guys (Google, Facebook) seem to be risk-averse about using
thread-level parallelism to earn their bread - BigTable, Hadoop, and Cassandra
are centered on a multi-process architecture, basically built for horizontal
scaling: add more nodes, assume they fail.

Where is this push towards parallel machines coming from? Not really from the
consumer web, or from any consumer-oriented service where an entrepreneur can
gain value by utilizing multi-processor systems.

It's the organizations with big bucks that call the shots, or drive
innovation - like NCEP/NOAA in weather prediction. Not the guy who moves to
the Bay Area wanting to hack something, get traction in user base, get
funding, and finally mint money :-)

The demand for writing consumer-facing software for multi-processor systems is
simply too small right now.

------
wladimir
I want parallel computers, a lot of them!

Brains are big parallel machines, and it seems inevitable that computers will
follow the path to massive parallelism that evolution took long ago.

~~~
JoachimSchipper
Why? Both the hardware and the "programming" seem quite different.

~~~
wladimir
One simple answer: scalability.

