

The Trouble with Multicore - alexviktor
http://spectrum.ieee.org/computing/software/the-trouble-with-multicore

======
10ren
1\. Perhaps we need to actually have the hardware before we can start playing
with it, and come up with solutions. That's how it usually goes.

2\. Maybe they are jumping ahead to "many-core". Instead ask: what can we do
with this extra silicon? In the past this led to cache; pipelining; a faster
multiplication technique; on-board maths "co"-processor. Today, it gives us
systems-on-a-chip; hardware video-decoding; absurd GPUs; and even physics PUs.

I spoke to the person who developed that multiplication technique, and he said
it came about because extra silicon became available. Before then it was
inconceivable, in the sense that people did not think of it because they
_could_ not conceive of something so wasteful of silicon.

------
wmf
Counterpoint: The solution is multicore.
<http://www.darrylgove.com/2010/06/solution-is-multicore.html> "1. Many
problems just need parallelising once. 2. Given a number of cores, it is
almost always possible to find work to keep them busy. 3. There are very few
day to day tasks that are actually limited by processor performance."

------
dkersten
Repost of <http://news.ycombinator.com/item?id=1477776>

My response to this still stands: the problem is an education problem, little
more. Read the full response here:
<http://news.ycombinator.com/item?id=1478331>

------
Nwallins
The "trouble" is only when you want a single process to monopolize the
entirety of CPU resources. In today's (and even yesterday's?) processing
environment, there are often tens if not hundreds of processes running at
once, and 5 or 15 of them may be "performance critical".

Are multicore architectures not able to distribute this sort of load?

~~~
wmf
On PCs it's common to have hundreds of processes _sleeping_, but rare to have
more than one or two threads _runnable_.

------
whyenot
How many different things can a human being think about at once? Maybe 5? 10?
probably not an order of magnitude more than that. Maybe part of the problem
is that we are running into some limitations of the human brain. Education may
help, but maybe we should also take care that we aren't trying to jam round
pegs into square holes.

~~~
InclinedPlane
Software development passed outside the realm where a single human brain could
fully comprehend the entirety of a project at every level of detail about five
decades ago. The question is not whether abstractions are required to let
human brains deal with threading; that's a given. The question is whether or
not we have the right abstractions.

------
SeriousGuy
That's due to the fact that C++ is still without a standardized thread library
that works across all platforms. Thus new students who start with C or C++
don't get proper exposure to the concepts of writing multithreaded
applications. I'd guess it's the single biggest argument against C++.

~~~
shin_lao
This is untrue. First, a stable boost::thread library has been available for a
while; second, threads made it into the standard, as did futures.

I don't think the problem is the lack of a thread library; concurrency
requires more than just being able to spawn and wait for threads.

Things like task managers, concurrent containers, and transactional memory
wrappers are better answers to the problem.

~~~
SeriousGuy
As someone who learned C++ a few years back, let me tell you that professors
in universities actively discourage use of Boost.

There is C++0x, which contains hash set and hash table containers, but even
use of those is discouraged.

I think the C++ standardization effort has become a joke, and I doubt we will
ever see these features used in real life.

Even funnier are the people who claim to be waiting for the emergence of D.
(My professor said he was waiting for Andrei Alexandrescu or someone to finish
his D book.)

~~~
ewjordan
_As someone who learned C++ few years back let me tell you that professors in
Universities actively discourage use of Boost._

I'm curious - why is this? Do they just not like students leaning on libraries
at all, or is it something specific about Boost? I suppose supporting an
entire classroom of students trying to install/build the thing across a dozen
different operating system versions might be something that a professor would
want to avoid, but this is a general problem with most tools, so I don't think
it should be such a show stopper...

For me, Boost is to C++ as Apache Commons is to Java: literally the first
thing I drop in to any project where I'm going to be doing heavy lifting. To
some extent, these two mature and highly useful libraries (I guess I should
say collections of libraries, really) are the main things the corresponding
_languages_ have going for them.

C++ without Boost is just an exercise in pain and humiliation, IMHO...

~~~
SeriousGuy
I wish my professor would listen; this is the course I took:
[http://www.ecs.syr.edu/faculty/fawcett/handouts/webpages/CSE...](http://www.ecs.syr.edu/faculty/fawcett/handouts/webpages/CSE687.htm).
The worst part was that we were supposed to parse C++ code without the use of
any standard grammar system. Even regexps weren't allowed. We only got a
tokenizer.

~~~
ewjordan
Yowza, Microsoft all over the place in that course...I weep for the commuter
student that runs OS X at home and has no desire (or disk space) to dual boot
Windows.

That said, the restriction against Boost may be primarily to ensure that
people don't rely too much on its features, which would make a lot of those
projects almost trivial to solve without actually demonstrating mastery of
the underlying concepts. So I suppose it's a worthwhile exercise, but it does
point to the main problem with many CS courses: there's never enough emphasis
on reusing what other people have already done. Outside academia, the first
thing you should always do is check whether someone has already written the
code you're about to waste a week on...

------
api
Yawn. Another "OMG programmers can't handle multicore!" bullshit article.

I do threads all the time, and there are lots of things out there, like
various MapReduce toolkits, that make it even easier than dealing directly
with threads if you want. Threads are not that hard. It just takes some
understanding, followed by practice, to get a sense for it.

~~~
tptacek
Disagree. Writing programs that are correct with multiple threads is harder
than writing single-threaded programs (remember that if you're motivated
enough to _post_ about it, you're in the top 0.1% of programmers).

But that aside, writing programs that are truly faster and, more importantly,
that scale reasonably with the number of threads, compute resources, other
resources, &c is far harder. Have you profiled your code? Have you had the
experience of untangling serializations around locks? Have you had the
experience of having to custom-code your own sync primitives because locking
overhead started to kill you? Are you graphing performance over number of
threads under stress? I've worked on projects like that, watched the curves
flatten (and sometimes dip), and I don't think scalable multicore code is
anywhere nearly as simple as "man pthread, and remember to lock and unlock in
the right order".

I'm not trying to say you don't know what you're talking about. I'm saying
that you're underestimating the amount of work that goes into making fast
multithreaded designs work; you may do all this stuff without even thinking
about it, but you have to remember that this is work that you don't have to do
at all in "normal" nonscalable designs.

~~~
aristus
I've a possibly silly question: since cache misses can dominate performance
even in the single-core case, why don't they use all that silicon to make one
core with craptons of on-die cache?

~~~
tptacek
Presumably because, at a given level of performance, L2 cache size and die
size/complexity don't have favorable scaling properties. You can ask Google
this question almost verbatim and find articles talking about SRAM ports and
such.

The beauty of multicore scaling is that it scales.

------
francoisdevlin
Insert FP rant here

