
Concurrency is not Parallelism (it's better) - A talk by Rob Pike - luriel
http://concur.rspace.googlecode.com/hg/talk/concur.html#title-slide
======
dons
great to see this coming from non-functional language folks (after years of
banging on about it in Haskell land -
[http://ghcmutterings.wordpress.com/2009/10/06/parallelism-co...](http://ghcmutterings.wordpress.com/2009/10/06/parallelism-concurrency/))

Of course, concurrency is not the end of things. Deterministic parallelism is
a beautiful model for making things faster, easily. So you need that in your
language somehow too.

Edit: there's a cite for Bob Harper on this,
[http://existentialtype.wordpress.com/2011/03/17/parallelism-...](http://existentialtype.wordpress.com/2011/03/17/parallelism-is-not-concurrency/), which was a bit later than Simon's post.

~~~
bascule
The process isolation model that CSP provides is generally useful within
languages with mutable state.

In particular, the Kilim library for Java uses "microthreads" (similar to
goroutines) plus a linear ownership transfer system such that only one
microthread owns a given object/object graph at a time, preventing concurrent
state mutation while still providing mutable state. Sending an object from one
Kilim task to another transfers ownership, and thus messages become the
mechanism by which state is isolated at the process level.

Mutable state can be safe too. You just need the proper abstractions.
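
The analogous discipline in Go is purely a convention: once a value is sent on a channel, the sender stops touching it. A rough sketch (the Manual type is made up, and unlike Kilim nothing here is checked at compile time):

    package main

    import "fmt"

    // Manual is a hypothetical mutable value whose ownership is handed off.
    type Manual struct {
        Title   string
        Shelved bool
    }

    func main() {
        ch := make(chan *Manual)
        done := make(chan struct{})

        // The receiver becomes the sole owner of anything it pulls off ch.
        go func() {
            m := <-ch
            m.Shelved = true // safe: the sender no longer touches m after sending
            fmt.Println(m.Title, "shelved:", m.Shelved)
            close(done)
        }()

        m := &Manual{Title: "C Manual"}
        ch <- m
        // Convention only: ownership of m now belongs to the other goroutine.
        <-done
    }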

~~~
Peaker
This sounds like runtime safety, not compile-time safety. Otherwise, how can
you avoid aliasing that modifies the object for which you lost ownership?

Immutable state gives you compile-time safety. I think when people say "shared
mutable state is unsafe", at least some of them are referring to compile-time
safety. If it compiles, these kinds of concurrency-related bugs are
impossible.

~~~
wmf
At least in Rust, unique pointers should be guaranteed by the compiler so when
you pass one to another thread you know there's no aliasing.

------
mahmud
Robert Harper made the distinction between concurrency and parallelism in a
paragraph (freely available text to boot!)

" _Parallelism should not be confused with concurrency. Parallelism is about
efficiency, not semantics; the meaning of a program is independent of whether
it is executed in parallel or not. Concurrency is about composition, not
efficiency; the meaning of a concurrent program is very weakly specified so
that one may compose it with other programs without altering its meaning. This
distinction, and the formulation of it given here, was pioneered by Blelloch
(1990). The concept of a cost semantics and the idea of a provably efficient
implementation are derived from Blelloch and Greiner (1995, 1996a)._ "

39.5 Notes, Practical Foundations of Programming Languages, Version 1.30
Revised 03.15.2012

<http://www.cs.cmu.edu/~rwh/plbook/book.pdf>

~~~
javert
Since being schooled by other HN readers on parallelism vs. concurrency in the
past, here's my way of distinguishing them:

Concurrency is a programming language issue, parallelism is a resource
allocation issue.

EDIT: Well, this appears to be stated pretty clearly in the presentation, but
I only got the presentation working after trying it for the 3rd time (in the
3rd browser).

------
Cushman
I'm loving the gopher drawings-- although I found it hard to maintain interest
after the gophers were gone, even though the material still made sense.

We need more cute illustrations of CS topics.

~~~
stcredzero
_why?

~~~
sophacles
There are several types of learning, and some people are better at (or prefer)
one over others. Some people do well with text. Others do well with pictorial
representations, others do better with demonstrations and talks, and some learn
by doing/failing/doing again.

I personally am good with pictorials and the do/fail iteration styles of
learning. Text is passable, but I am usually translating it to pictures in my
head. There have been times where I have gotten more out of one crappy but
representative picture than I have out of an hour lecture. Good pictures that
engage the viewer are even better. In this case I was easily able to imagine
the gophers walking along doing their tasks, and just "seeing" the flow
because the picture was pretty representative. I could have gotten the same
message from boxes and arrows, but not as quickly I think.

So why should we have more good diagrams? Because they would help more people of
a certain type learn to program more easily.

~~~
ori_b
I think he meant _why: <http://en.wikipedia.org/wiki/Why_the_lucky_stiff>

~~~
sophacles
Oh. Well, either way, the illustrations in the Ruby book are just comics as a
cute way to make side points, but the pictures themselves don't actually
convey information in the same way as the gopher diagrams in the talk. Please
feel free to reinterpret my comment retroactively to be about how that is the
case :)

------
warmfuzzykitten
Didn't do readers any favor by pointing at the title slide, where controls
need to be discovered by accident. A better link is the unfestooned

<http://concur.rspace.googlecode.com/hg/talk/concur.html>

~~~
read_wharf
Well that's the weirdest thing. I've seen this kind of thing before, so I
guessed that <- and -> would work. But the page asks my browser to store
information. If I ignore the question and just allow the query to continue to
display at the top of the window, I can navigate fine. If I say "Not now," I'm
stuck on whatever page I was on when I answered, and have to reload to be able
to move again. Ubuntu 11.10, FF recent.

------
dalore
The gophers remind me of Dwarf Fortress. Moving stuff around in piles and
doing it in an efficient manner so your fortress has no bottlenecks.

So for a good lesson in concurrency and parallelism, play Dwarf Fortress.

~~~
micaeked
try spacechem

------
ww520
On the "Goroutines are not threads" slide, it says, "When a goroutine blocks,
that _THREAD_ blocks but no other goroutine blocks."

I hope that's a typo. Blocking a thread whenever a goroutine blocks would
exhaust the worker thread pool pretty fast. Can a Go expert clarify that?

~~~
masklinn
Goroutines are threads though (in the sense of preemptively scheduled
concurrent execution sharing memory); they just aren't OS threads.

~~~
skelterjohn
But they aren't "preemptively scheduled concurrent execution sharing memory".

They are "cooperatively scheduled concurrent execution sharing memory".

~~~
4ad
The current implementations are cooperatively scheduled, but nothing in the
specification prevents a preemptively scheduled implementation. After all,
gccgo had a preemptively scheduled implementation until a few months ago.

That being said, I prefer my goroutines to be cooperatively scheduled.

------
jowiar
Ted Stevens had it wrong: The internet is a series of gophers!

More relevantly - definitely going to come up with more stories/silly pictures
for presentations in the future. It channels the memory technique of making
ridiculous associations that actually stick.

------
espeed
I've started to dive into Clojure over the last week, and I have been looking
at how to implement an async WebSocket server.

It seems the Clojure way is to use something like Aleph
(<https://github.com/ztellman/aleph>, [http://blip.tv/clojure/zach-tellman-aleph-a-framework-for-as...](http://blip.tv/clojure/zach-tellman-aleph-a-framework-for-asynchronous-communication-4899245)),
which is a library that implements channels like Go; however, Go's channels are
first-class constructs built into the language.
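
For contrast, a minimal sketch of what "first-class" means in Go: a channel is an ordinary value that can be passed to functions or stored in structs (the worker/jobs names are just illustrative):

    package main

    import "fmt"

    // worker reads jobs from one channel and reports results on another;
    // both channels are ordinary values passed in as arguments.
    func worker(jobs <-chan int, results chan<- int) {
        for j := range jobs {
            results <- j * j
        }
        close(results)
    }

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        go worker(jobs, results)

        go func() {
            for i := 1; i <= 3; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        for r := range results {
            fmt.Println(r)
        }
    }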

~~~
yoklov
To be fair, with lisps there's no real difference between a feature from a
library and a feature provided by the language.

------
rollypolly
But what are we to do in a world built, at its lowest level, on imperative
languages (C) without intuitive support for either concurrency or parallelism?

Does Google foresee a future entirely built in Go, the same way NeXT expected
the world to evolve towards Objective-C?

~~~
tesseractive
Why does the whole world have to be built that way? If, the next time you
start a new web app project, that web app were built this way, then you
would see the advantages of this approach in your web app.

If there are equivalent ways of designing for concurrency in Java or C# (or
those ways are added to the language in the future), then you may be able to
restructure existing Java and C# code bases to take advantage of this
approach.

If you have 20 million lines of Cobol transaction processing code and there's
no practical way of migrating your code base to use this methodology, then
your 20 million lines of Cobol will keep running the same way they always
have.

Unless I'm missing a point you're trying to make (certainly possible) there's
no reason to think that this is an all or nothing approach.

~~~
zmj
C# 4.0 actually has all the pieces you'd need to write Go-style concurrent
code. It just won't be pretty.

The upcoming 4.5 release fixes some of the ugly.

------
nivertech
He could've described SIMD with gophers:

 _putting only one manual per cart is SISD vs. putting several manuals per
cart which is SIMD_

Also his example (moving manuals) is really an example of _parallelism_ and not
_concurrency_. He has one large problem, which is decomposed into two smaller
problems. Not much different from multiplying two large vectors.

A real example of _concurrency_ would be librarians handling books returned
to the library. They don't know how many books will be returned or when -
i.e., very similar to a web server handling HTTP requests.
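
For a concrete anchor, Go's standard net/http server behaves like those librarians: each incoming request is handled in its own goroutine. A minimal sketch (the handler and path are just illustrative):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func returned(w http.ResponseWriter, r *http.Request) {
        // net/http runs each request in its own goroutine, so a slow
        // "return" doesn't block the librarians handling the others.
        fmt.Fprintf(w, "checked in: %s\n", r.URL.Path)
    }

    func main() {
        http.HandleFunc("/return/", returned)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }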

------
nivertech
I don't like his definitions of these terms. IMHO these definitions are
better:

Concurrency

 _property of systems in which several computational processes are executing
at the same time, and potentially interacting with each other_

Parallelism

 _computation in which many calculations are carried out simultaneously,
operating on the principle that large problems can often be divided into
smaller ones, which are then solved concurrently (i.e. "in parallel")_

<http://www.slideshare.net/nivertech/migrationtomulticore>

------
chaostheory
I think how you think of both concurrency and parallelism depends on what you
use to achieve both. When you use Akka, an actor model library for Scala and
Java (<http://akka.io/>), doing both feels identical.

------
rkuester
What software produced this slide presentation?

~~~
luser001
If you didn't know about it, you might want to look at asciidoc:
<http://www.methods.co.nz/asciidoc/> (search for slidy)

------
seunosewa
You can't expect people to take your new "concurrent" language seriously if
their concurrent programs won't run proportionately faster on multicore
systems. Channels & green threads are nice, but you can get true parallelism
with Java. Don't make excuses for your language. Just solve the problem!

~~~
masklinn
These are different things: languages like Go and Erlang use m:n mapping to
map their m lightweight threads/processes onto n kernel threads (where n is
usually the number of physical cores on the machine) to benefit from hardware
concurrency.

But the presentation is about concurrency and parallelism not being the same
thing, and more concurrency not necessarily leading to more parallelism.

> Don't make excuses for your language.

There's no excuse made.

> Just solve the problem!

There's no problem to solve. If you're asked to count from 1 to 100, you can't
parallelize it: it's a purely sequential task, each step has a hard dependency
on the previous step. So there is no possible parallelism.

But it is possible to do something else at the same time, or to have one
process handle the counting and send the current count to another process
that prints it. That's concurrency: you've got two things running at the
same time.
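
A minimal Go sketch of that split (the counting itself stays sequential, while counting and printing proceed concurrently):

    package main

    import "fmt"

    func main() {
        counts := make(chan int)

        // One goroutine produces the inherently sequential count...
        go func() {
            for i := 1; i <= 100; i++ {
                counts <- i
            }
            close(counts)
        }()

        // ...while the main goroutine prints concurrently with the counting.
        for n := range counts {
            fmt.Println(n)
        }
    }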

~~~
0xABADC0DA
Single-threaded concurrency was never the problem to begin with and it's used
everywhere, in specific forms. Even C has it... for instance in your example
printf formats to a buffer and writes at some later time, which is concurrent
in that the next numbers can be created and formatted before the first printf
operation is complete.

But the reason almost nobody uses any of the general forms of this concurrency
(coroutines, fibers) is because the general case of this is as useless as it
is easy to do. For instance there are plenty of C coroutine libraries
(including Russ' libtask) and they work fine. The reason nobody uses them is
because the situations where this is called for are precious few.

The other day on reddit Ian of gccgo even butted into a conversation about 1:1
threads vs m:n threads, but could not muster up an answer as to under what
conditions m:n threads (a la goroutines) would be called for. Simultaneous
execution (threads for example) was always the hard part and any talk about
concurrency that isn't about simultaneous execution is just tilting at
windmills.

~~~
masklinn
Not sure what you're trying to say: m:n means simultaneous execution of
concurrent tasks for any n > 1.

~~~
0xABADC0DA
The point is that it is threading/simultaneous execution that is the big deal.
A language designed around some 'concurrency not parallelism' slogan has
missed the boat... concurrency was never the problem that needed to be solved.

For instance in golang the only support the language has for simultaneous
execution is a threadsafe queue -- that's all. And the runtime libraries only
have variations on mutexes, where you even have to manually create locks and
manually remember to unlock them. This is _extremely_ weak sauce for a
'concurrent' language.
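
For reference, the manual locking pattern being described looks roughly like this in Go (the Counter type is made up; defer helps with unlocking, but the discipline is still manual):

    package main

    import (
        "fmt"
        "sync"
    )

    // Counter guards its state with a manually managed mutex.
    type Counter struct {
        mu sync.Mutex
        n  int
    }

    func (c *Counter) Inc() {
        c.mu.Lock()
        defer c.mu.Unlock() // forget this and later callers deadlock
        c.n++
    }

    func main() {
        var c Counter
        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.Inc()
            }()
        }
        wg.Wait()
        fmt.Println(c.n) // 10
    }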

