
Ask HN: Good resource to explain concurrency vs. parallelism - sharmi
I have been looking for some resource that explains the concepts of concurrency and parallelism and differentiates the two.

The closest I have got to clarity is http://berb.github.io/diploma-thesis/community/023_concurrency.html

I plan to read the whole thesis for a good understanding of concurrency and its different implementation models. Meanwhile, I am interested to know if any other resource has worked for you. I am most familiar with Python, so any resource that leans that way is nice to have, though not necessary. If that is the wrong way to approach this, please advise me on a better way.
======
blackflame7000
Think of it this way: On a single core chip, you can have multiple threads
running concurrently, but only one thread will execute at a time. The
operating system context switches between each of these threads very quickly
to give the appearance of multiple threads executing at the same time. On a
4-core machine, you can have 4 threads executing both concurrently and in
parallel.

The key detail here is that when threads execute in parallel, each core of the
CPU will have its own independent cache and thus changes that occur on one
core may not become immediately available to the thread executing on another
core. As a result, issues like race conditions are magnified on multi-core
or hyperthreaded CPUs.
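Since the OP knows Python, here is a minimal sketch of the distinction (all names are my own illustration, not from the thesis). In CPython, threads are concurrent but the GIL prevents CPU-bound work in them from running in parallel, so a thread pool and a process pool behave very differently on the same workload:

```python
# Concurrency vs. parallelism in CPython (illustrative sketch).
# Threads: the OS/interpreter interleaves them on cores, but the GIL
# serializes CPU-bound Python bytecode -- concurrent, not parallel.
# Processes: each gets its own interpreter, so they can run in parallel
# on separate cores.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # CPU-bound busy loop
    total = 0
    for i in range(n):
        total += i
    return total

def timed(executor_cls, n=2_000_000, workers=4):
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as ex:
        list(ex.map(burn, [n] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    # On a multi-core machine the process pool is typically much faster
    # for this CPU-bound task; the thread pool is effectively serialized.
    print("threads:   %.2fs" % timed(ThreadPoolExecutor))
    print("processes: %.2fs" % timed(ProcessPoolExecutor))
```

(For I/O-bound work, threads alone would show a speedup too, since the GIL is released while waiting.)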

~~~
sharmi
Thank you blackflame, that gives some good clarity.

So can we say concurrency is a superset of parallelism?

~~~
blackflame7000
Yea, that is a fair assumption. Threads running concurrently are not
necessarily running in parallel; however, all threads running in parallel
are also running concurrently.

Again, the point the book is trying to make by distinguishing between the
two is that when threads are truly running in parallel, each is operating
on its own private copy of the data in its core's cache, which is later
written back to main memory. Without any synchronization of the order in
which cores write to main memory (remember, the compiler/CPU are allowed to
re-order instructions), it is possible for one core to accidentally undo
the changes another core just made. This is called a race condition, and it
is mitigated by the use of mutual exclusion locks, aka mutexes, which allow
only one thread into a critical section at a time.
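A quick sketch of that lost-update pattern in Python (my own toy example, not from the thesis): two threads each do a read-modify-write on a shared counter. Without a lock the interleaving "read, read, write, write" can lose updates; holding a lock makes each read-modify-write atomic.

```python
# Race condition vs. mutex: unsynchronized read-modify-write can lose
# updates; a threading.Lock serializes the critical section.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        tmp = counter      # read
        tmp += 1           # modify
        counter = tmp      # write -- another thread may have written meanwhile

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:         # only one thread in the critical section at a time
            counter += 1

def run(worker, n=100_000, threads=4):
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

if __name__ == "__main__":
    print("unsafe:", run(unsafe_increment))  # may be less than 400000
    print("safe:  ", run(safe_increment))    # always 400000
```

Whether the unsafe version actually loses updates on a given run depends on how the interpreter happens to interleave the threads, which is exactly why these bugs slip through testing.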

Early multi-core CPUs had relatively small caches, which made accidentally
stepping on another core's changes less likely. Modern CPUs, however
(recent i7s, for example), have relatively large caches, which can
exacerbate race conditions. Anecdotally, I once wrote code that passed
through all testing without any problems, only to crash during deployment
when run on a newer Core i7 CPU. I had to explain why this was happening,
which led me to learn all about CPU caches and whatnot.

