This is regarding the slides by Rob Pike with the above title. Every time I go through them I feel like a moron; I'm not able to figure out the gist of it. It's well understood that concurrency is the decomposition of a complex problem into smaller components. If you cannot correctly divide something into smaller parts, it's hard to solve it using concurrency.
But there isn't much detail in the slides on how to get parallelism once you've achieved concurrency. In the Lesson slide (number 52), he says Concurrency - "Maybe Even Parallel". But the question is - when and how can concurrency correctly and efficiently lead to parallelism?
My guess is that, under the hood, Rob is pointing out that developers should work at the level of concurrency, and that parallelism should be the language's/runtime's concern (GOMAXPROCS?). Just care about intelligent decomposition into smaller units, concerned only with correct concurrency - parallelism will be taken care of by the "system".
Please shed some light.
Slides: http://concur.rspace.googlecode.com/hg/talk/concur.html#title-slide
HN Discussion: http://news.ycombinator.com/item?id=3837147
In Go, it is easy to create multiple tasks/workers, each with a different job. This is implicitly parallelizable - each task can (but doesn't have to) run on its own thread. The only time the workers can't run in parallel is when they are waiting on communication from another worker or an outside process.
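Here's a minimal sketch of that task style (the function names are just illustrative, not from the slides): three goroutines with different jobs, connected by channels. The Go runtime is free to run them in parallel on separate OS threads or to interleave them on one thread; the program's result is the same either way.

```go
package main

import "fmt"

// generate produces a few numbers and closes the channel when done.
func generate(out chan<- int) {
	for i := 1; i <= 5; i++ {
		out <- i
	}
	close(out)
}

// square reads numbers, squares them, and passes them along.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	nums := make(chan int)
	squares := make(chan int)

	go generate(nums)        // task 1: produce numbers
	go square(nums, squares) // task 2: transform them

	for s := range squares { // task 3 (main goroutine): consume results
		fmt.Println(s)
	}
}
```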
This is in contrast to data-level parallelism, where each thread runs nearly the same instructions on different input, with little to no communication between the threads. An example would be increasing the blue level of each pixel in an image: each pixel can be operated on individually and in parallel.
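For contrast, a sketch of that data-parallel case (the pixel type and brightenBlue are made up for illustration): every worker runs the same code over a different slice of the pixel buffer, with no communication between workers beyond the final wait.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type pixel struct{ r, g, b uint8 }

// brightenBlue raises the blue channel of every pixel in its chunk, clamping at 255.
func brightenBlue(chunk []pixel) {
	for i := range chunk {
		if chunk[i].b <= 235 {
			chunk[i].b += 20
		} else {
			chunk[i].b = 255
		}
	}
}

func main() {
	img := make([]pixel, 1000)
	workers := runtime.NumCPU()
	chunk := (len(img) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		if lo >= len(img) {
			break
		}
		hi := lo + chunk
		if hi > len(img) {
			hi = len(img)
		}
		wg.Add(1)
		go func(part []pixel) { // same instructions, different data
			defer wg.Done()
			brightenBlue(part)
		}(img[lo:hi])
	}
	wg.Wait()
	fmt.Println("done; first pixel:", img[0])
}
```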
So the push is for more task-based parallelism in programs. It is very flexible: the tasks can actually run in parallel or sequentially, and the outcome of the program is the same either way.
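One way to see this in Go is runtime.GOMAXPROCS, which limits how many OS threads execute goroutines simultaneously. The sketch below (the work it does is made up) produces the same total whether it runs interleaved on one core or in parallel on many.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	runtime.GOMAXPROCS(1) // try 1 for interleaved execution, or runtime.NumCPU() for parallel

	var wg sync.WaitGroup
	sums := make(chan int, 4)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(start int) { // each goroutine sums a different range of 250 numbers
			defer wg.Done()
			s := 0
			for j := start; j < start+250; j++ {
				s += j
			}
			sums <- s
		}(i * 250)
	}
	wg.Wait()
	close(sums)

	total := 0
	for s := range sums {
		total += s
	}
	fmt.Println(total) // always 499500, regardless of GOMAXPROCS
}
```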