
Concurrency Is Not Parallelism (2013) - amelius
http://blog.golang.org/concurrency-is-not-parallelism
======
rubiquity
I've had the most success with describing concurrency and parallelism as
follows:

The term "Parallelism" these days is synonymous with "CPU Parallelism."
Parallelism has existed for a long time. For example, network cards have been
able to do I/O parallelism for what seems like forever. Could you imagine if a
computer could only have a single network request in flight at a time?
Therefore, what parallelism (read: CPU Parallelism) means is _CPU execution
happening at literally the same time_. Emphasis on _literally_.

Concurrency is how we make sense of and _communicate_ the above mess. In the
days of single cores, we mostly cared about concurrency as a way to
communicate I/O parallelism in a manageable way. Now we use concurrency to
express both forms of parallelism in our programs. Good implementations of
concurrency (Erlang) make all of this transparent to the user. Bad
implementations of concurrency (Node.js) have the user doing all of the work
(callbacks).
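
A rough Go sketch of the difference, since Go is what the article is about
(the URLs are made up; the point is that each request reads sequentially while
the runtime multiplexes the in-flight I/O, no callbacks required):

    package main

    import (
        "fmt"
        "net/http"
        "sync"
    )

    func main() {
        // Hypothetical endpoints; each fetch is I/O parallelism,
        // but the code expressing it stays straight-line.
        urls := []string{"http://example.com/a", "http://example.com/b"}
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(u string) { // one goroutine per request
                defer wg.Done()
                resp, err := http.Get(u)
                if err != nil {
                    fmt.Println(u, err)
                    return
                }
                resp.Body.Close()
                fmt.Println(u, resp.Status)
            }(u)
        }
        wg.Wait()
    }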

~~~
fokinsean
I am not well versed in Erlang, but have dabbled in some Node. What does
Erlang do that makes it better than Node, as far as concurrency is concerned?

~~~
vezzy-fnord
Pretty much everything, honestly. Erlang was built from the start to
accommodate massive concurrency and robustness requirements, in particular for
telecom switches, but largely applicable to anything.

Node's a single-threaded reactor pattern event loop, whereas Erlang/OTP is an
SMP-enabled virtual machine with a preemptive scheduler that manages
lightweight actors called processes, all isolated from each other with their
own stack and heap, that can be cheaply spawned and communicate only via
message passing. On top of that, Erlang offers location transparency for
processes and easy node distribution out of the box. OTP then provides a
framework for fault tolerance and reusable components by supplying generic
behaviors (server, FSM, etc.), supervisors for monitoring and restarting
processes when they crash, endpoints for hot code reloading, and more.
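
Not Erlang, but here's a minimal Go sketch of the mailbox idea, since Go is
what the article is about (everything here is invented for illustration): an
isolated worker that owns its state and reacts only to messages.

    package main

    import "fmt"

    type msg struct {
        value int
        reply chan int // where the sender wants the answer
    }

    // A tiny "process": it owns its state, shares nothing,
    // and reacts only to messages arriving in its mailbox.
    func counter(mailbox chan msg) {
        total := 0
        for m := range mailbox {
            total += m.value
            m.reply <- total
        }
    }

    func main() {
        mailbox := make(chan msg)
        go counter(mailbox) // cheap to spawn, vaguely like spawn/1

        reply := make(chan int)
        for i := 1; i <= 3; i++ {
            mailbox <- msg{value: i, reply: reply}
            fmt.Println("total:", <-reply)
        }
        close(mailbox)
    }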

You should try it one day. There's really nothing else like it, even if its
underlying concepts aren't necessarily novel; they're just well combined, and
largely obscure in mainstream practice. A lot of platforms have tried to
implement their own equivalent actor-model approaches (Celluloid for Ruby,
Akka for Scala...), but none reach the same level.

~~~
fokinsean
Thanks for the overview! I have seen a lot of hype around Elixir lately, do
you happen to know if it contains all of the benefits of Erlang? Coming from a
Ruby background, Elixir looks very nice to work with syntax-wise compared to
Erlang.

~~~
rubiquity
You get all of the benefits of Erlang the language, Erlang the VM, and the
Erlang libraries. Elixir adds hygienic macros, an awesome package manager
(hex), and a task/test runner and build tool (mix). Elixir also has a Ruby-ish
syntax, but I leave it to the reader to form their own opinion of Elixir
syntax vs Erlang syntax, as that is not a war I'm interested in.

------
_halgari
The best description of concurrency vs parallelism I've heard is from "Clojure
for the Brave and True" where it's described via a Lady Gaga song, pure
genius:
[http://www.braveclojure.com/concurrency/](http://www.braveclojure.com/concurrency/)

------
chipsy
When I actually looked up these terms I found no strong, exact agreement on
meanings.

However, of the definitions I did find, the ones referenced in the Clojure
documentation seemed most reasonable - from memory, something like the
difference between the coordination of simultaneous tasks and their execution.

This definition allows for the existence of concurrent systems without formal
concurrency primitives - every action game deals with race conditions even
though the gameplay loop is typically single threaded, because it has to model
apparently independent agents against a global shared-state environment, and
the order in which those agents process data matters. In most instances there
is no crisis for the game if the situation is left poorly specified - it may
slightly bias some elements or defer events by an additional frame - but for
certain things it can blow up into a showstopper bug with physics or cutscene
triggers, or harm responsiveness.
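
A tiny Go sketch of that kind of race, with no threads in sight (the "world"
and "agents" are invented; the only thing deciding the outcome is the order
the loop updates them in):

    package main

    import "fmt"

    // Two "agents" share one world. No threads anywhere, but the
    // update order changes the outcome: a race in the design,
    // not in the scheduler.
    type world struct{ food int }

    func eat(w *world, name string) {
        if w.food > 0 {
            w.food--
            fmt.Println(name, "eats")
        } else {
            fmt.Println(name, "starves")
        }
    }

    func main() {
        w := &world{food: 1}
        // One frame of the game loop: whoever goes first wins.
        eat(w, "agentA")
        eat(w, "agentB") // swap these two lines and B wins instead
    }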

This correlates with a common trope among students who encounter the agents
problem for the first time - they try to run each AI on its own thread,
further pushing away their ability to direct the concurrency, while "gaining"
parallel performance (not really, of course).

~~~
derefr
I would phrase it as:

Concurrency is a possible feature of an abstract machine, exposed as a
programming-language-design paradigm. A language can target a platform that
"has concurrency", and this heavily shapes the language. You can end up
"coding Go/Erlang in C"—writing concurrent code in your head for a concurrent
abstract machine, then translating it to your target language.

Parallelism, meanwhile, is a VM/runtime/platform implementation detail.

------
ww520
You can have concurrency with one CPU but you can't have parallelism with one
CPU.

Concurrency is the ordering and dependency (or lack thereof) between two (or
more) paths of execution. Parallelism is the actual doing of work in parallel.
They describe two different things. A program can be concurrent only, parallel
only, or both, and that's where the confusion comes in.
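
A quick Go sketch of the split: the same two concurrent goroutines, first
pinned to one CPU (concurrent only), then given two (concurrent and parallel).
runtime.GOMAXPROCS is the real knob; the spin loop is just a made-up workload:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    func spin(wg *sync.WaitGroup) {
        defer wg.Done()
        sum := 0
        for i := 0; i < 500000000; i++ {
            sum += i // burn CPU so there's real work to schedule
        }
        _ = sum
    }

    func main() {
        for _, procs := range []int{1, 2} {
            runtime.GOMAXPROCS(procs) // 1: interleaved; 2: simultaneous
            var wg sync.WaitGroup
            wg.Add(2)
            start := time.Now()
            go spin(&wg)
            go spin(&wg)
            wg.Wait()
            fmt.Printf("GOMAXPROCS=%d: %v\n", procs, time.Since(start))
        }
    }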

~~~
_greim_
> you can't have parallelism with one CPU

What about multithreading? If a single core switches between tasks I can see
that not being true parallelism, but what about multi core?

~~~
ww520
Multithreading on a single CPU doesn't have parallelism. You are just
switching between tasks very fast serially.

A multi-core CPU for all intents and purposes has multiple CPUs. For the
discussion of concurrency and parallelism, the distinction is between one
execution unit and multiple execution units.

~~~
lloeki
> Multithreading on a single CPU doesn't have parallelism. You are just
> switching between tasks very fast serially.

Nitpick: SMT[0] (implemented as Hyper-Threading by Intel) is in fact
parallelism for threads on a single CPU.

[0]:
[http://en.wikipedia.org/wiki/Simultaneous_multithreading](http://en.wikipedia.org/wiki/Simultaneous_multithreading)

------
bottled_poe
I saw this image [1] a while ago. At first I thought it was clever, implying
that multithreaded programming often seems like chaos in practice. But
actually, that chaos is a result of the nondeterministic nature of concurrent
processing. Each thread is doing its own thing as quickly as possible.
Specifically, it is important to note that concurrent threads cannot be
expected to perform at equal speed.

In this particular analogy, if the goal is to consume the food as quickly as
possible, this may be the most efficient solution.

1. [http://i.imgur.com/iDVTstR.jpg](http://i.imgur.com/iDVTstR.jpg)

------
msie
Concurrency: you have at least two threads sharing control of one CPU. Only
one thread runs at any instant. There is no parallelism. The important thing
is that multiple threads are in progress.

Parallelism: the two threads are running simultaneously on two CPUs. There is
concurrency too since there are multiple threads.

~~~
kazinator
The two situations you describe are more accurately called multitasking and
multiprocessing. The latter gives us the "MP" in "SMP".

> _Concurrency: [...] There is no parallelism._

> _Parallelism: [...] There is concurrency too since ..._

Oops, contradiction! :)

~~~
msie
Ya got me there. :)

Edit: I should have written more clearly that having concurrency doesn't imply
parallelism.

------
mkehrt
Oh, look, another group of people making another arbitrary distinction between
these two words, which is shared by no one else.

In grad school, we used concurrency to mean interleaved, single-processor
threading, while parallelism meant multiprocessor execution.

I've heard other people use this distinction to mean fork-join style
concurrency vs. shared memory parallelism.

Can we just accept that these two words mean roughly the same thing and stop
trying to get one community's niggling technical distinction to be accepted as
gospel?

~~~
hartror
I've seen this distinction made in many "communities" including Ruby, Python
and Perl.

Perhaps in your domain/language this distinction isn't important but it is for
some of us. The patterns you use to work with concurrent vs parallel code
differ, which means it is an important distinction to understand.

~~~
mkehrt
My claim isn't that this distinction isn't useful; it's that the terminology
is a disaster, and we should find better terminology to make this distinction.
There's no consistency in using these particular terms to refer to these
particular meanings.

------
zak_mc_kracken
Every time this article or a similar subject is brought up, the thread quickly
fills up with people all giving different definitions of concurrency and
parallelism.

Nobody can agree on them so let's just assume they mean the same thing and
move to more interesting discussions, shall we?

~~~
curryhoward
> let's just assume they mean the same thing and move to more interesting
> discussions

They don't mean the same thing, and that is a poisonous attitude to have.
Parallelism is about performance. Concurrency is about synchronization. The
difference might not matter to you, but for certain things (programming
language design, library design, system architecture) it is paramount.

One thing nearly everyone agrees on: a single-core processor cannot do
parallelism, but it's perfectly capable of concurrency. How do you reconcile
that with your view that they are the same?

~~~
zak_mc_kracken
> The difference might not matter to you

The difference matters very much to me; what is silly is that nobody agrees
on which one is concurrency and which one is parallelism. In this thread
alone, I found two people with definitions different from yours.

------
pointernil
Several cars on a highway moving in one direction. On a one-lane highway
several cars can move forward, one after the other, _concurrently_.

On a two-lane highway several cars on each lane can move forward one after the
other concurrently, AND the cars on the other lane can move in _parallel_ to
them.

___

One CPU core has one program counter (pc). It executes one instruction after
another and keeps track of its position in the instruction stream by
incrementing the pc. With special code it is possible to make the pc jump to a
completely unrelated code block and later make it jump back to the first code
block. The core sees and executes a single stream of instructions, but the
code blocks, and the intentions behind them, can be quite unrelated. Several
concurrent code blocks can be managed (usually by the OS kernel), interleaved
into the one instruction stream the CPU core executes.

If there are several CPU execution cores, several instruction streams, each
with its own interleaved code blocks, can be processed in parallel.

___

Hey! At least I tried ;)

------
kazinator
This insistence on terminology is sadly incorrect; the slogan should really
be: _multitasking_ isn't _parallel processing_.

_Concurrency_ and _parallelism_ are very general, vague terms that mean
approximately the same thing: processing of some unspecified kind happening
simultaneously.

~~~
kazinator
For instance, parallelism can be in a single thread: "instruction-level
parallelism" (pipelines in a CPU: fetching some instructions while others are
being decoded and others still are executing on multiple functional units).

------
taeric
I don't get the difficulty here. Contextual synonyms are similar. And
different. Running concurrently with someone else is not the same as running
parallel with them. Though, to go parallel with them, you likely were running
concurrently. At least in most contexts.

------
snarfy
One is a property of a problem, the other, a property of a solution.

