
The multicore crisis: Scala vs. Erlang - iamwil
http://www.infoq.com/news/2008/06/scala-vs-erlang
======
anamax
The first Thinking Machines computer had tens of thousands of very dumb/slow
processors. Their second system had far fewer processors, but they were
significantly more capable.

Almost everyone at the time thought that the second machine was far more
usable. Were they wrong? If so, why?

~~~
wmf
No, they weren't wrong. People like Hillis and now Patterson are thinking
hardware-centric; the most efficient thing to build is many small cores. But
users don't want that. If people bought hardware efficiency, we'd all be using
Transputers, Alphas, and Cells. The real challenge is to figure out the most
efficient design that customers will actually buy.

------
axod
"and there are trends that suggests that will have thousands of cores before
we know it. But each core will be very slow compared to todays cores."

Yeah I don't buy that personally.

~~~
iamwil
I'm guessing the reasoning behind that statement is power consumption. The
higher the clock rate of a core, the more power it draws and the more heat it
dissipates. If you don't slow down each core in a crowded multicore chip,
things might heat up too much and melt down. Until we get semiconductors that
can withstand more heat, I'd say that's probably where we're headed.
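The back-of-envelope version of that argument is the dynamic-power rule of
thumb (a standard approximation, not something from the article):

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^2 \, f
```

where \(\alpha\) is the activity factor, \(C\) the switched capacitance,
\(V\) the supply voltage, and \(f\) the clock frequency. Since sustaining a
higher \(f\) generally requires a higher \(V\), power grows faster than
linearly in clock rate, which is why many slow cores can deliver more
throughput per watt than one fast core.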

------
quasimojo
this post seems misinformed. in a massively multicore world, _parallelism_ is
more important than _concurrency_ , and the two are not the same. basically
concurrency means the order of tasks is not known a priori. parallelism means
segmenting a problem so subproblems can be solved in many places at once.

in a massively multicore world, i'd offer that parallelism is the goal these
people want to address. why they did not mention haskell is beyond me.
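the distinction can be sketched in a few lines of scala (used here only
because it's the article's subject; the split into four chunks is an arbitrary
choice for illustration):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ParVsConc {
  def main(args: Array[String]): Unit = {
    // parallelism: one problem, segmented so subproblems run at once.
    val chunks  = (1L to 1000000L).grouped(250000).toList
    val partial = chunks.map(c => Future(c.sum))
    val total   = Await.result(Future.sequence(partial), 10.seconds).sum
    println(total) // 500000500000, independent of core count or scheduling

    // concurrency: independent tasks whose order is not known a priori.
    val a = Future { "A done" }
    val b = Future { "B done" }
    // a and b may complete in either order; the program is still
    // deterministic only because neither touches shared mutable state.
    println(Await.result(a, 10.seconds))
    println(Await.result(b, 10.seconds))
  }
}
```

the parallel half gives the same answer no matter how many cores run it; the
concurrent half is about coping with unknown ordering, which is a different
problem.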

~~~
meredydd
_in a massively multicore world, parallelism is more important than
concurrency, and the two are not the same_

I think you're right in a technical sense - that article used the term
slightly sloppily. However, the degree of parallelism in a program is limited
by the ability of the human author to cope with all the concurrent
interactions. So fundamentally, the two words boil down to the same problem.

One approach - that taken by Erlang and (from what I understand) Haskell - is
to force people to write programs in a special way (pure message-passing or
pure-functional), such that they become (almost) embarrassingly
parallelisable.

Another approach - that taken by Scala, and my personal favourite, Clojure -
is to keep the existing paradigm (JVM+threads in both those cases), and
_encourage_ people to write large parts of their program in styles which make
concurrency easier (Actors, STM, immutable values, and so on).
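A minimal hand-rolled sketch of that actor style, in Scala: one thread owns
the state, and all communication happens through immutable messages on a
queue. (Built directly on java.util.concurrent rather than any particular
actor library, purely for illustration.)

```scala
import java.util.concurrent.LinkedBlockingQueue

sealed trait Msg
final case class Add(n: Int) extends Msg
final case class Get(reply: LinkedBlockingQueue[Int]) extends Msg
case object Stop extends Msg

final class CounterActor {
  private val mailbox = new LinkedBlockingQueue[Msg]()
  private val worker = new Thread(() => {
    var count   = 0    // state confined to this one thread
    var running = true
    while (running) mailbox.take() match {
      case Add(n)     => count += n
      case Get(reply) => reply.put(count)
      case Stop       => running = false
    }
  })
  worker.start()
  def !(m: Msg): Unit = mailbox.put(m)   // the only way in
}

object Demo {
  def main(args: Array[String]): Unit = {
    val counter = new CounterActor
    counter ! Add(2)
    counter ! Add(3)
    val reply = new LinkedBlockingQueue[Int]()
    counter ! Get(reply)
    println(reply.take()) // 5; no locks visible to the caller
    counter ! Stop
  }
}
```

Because the mailbox is FIFO and there is a single consumer, callers never
need a lock; that confinement is the whole trick, whether you hand-roll it
like this or let an actor library do it for you.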

There's something to be said for both approaches: Erlang- or Haskell-style
"purity" keeps you well-behaved, and gives you a lot more parallelism "for
free". On the other hand, some processes, tasks and systems are fundamentally
sequential or mutable, and being forced into conceptual backflips to model
them creates pointless friction.

I don't mean to put one approach above the other - and I'm aware that my
summary is woefully inadequate - but I do believe this article is engaging
with a valid debate, and to pick it apart on sloppy use of a word is to miss
the point.

