
Concurrency Glossary - slikts
https://slikts.github.io/concurrency-glossary/
======
Eli_P
I tried to make something similar, but more visual and specifically for
Python (still a draft) [1].

Composability is what matters here: when you start combining these
parallel/async pieces into big programs, it can become a mess to reason
about.

[1]
https://github.com/ptytb/pyroboros/blob/master/Pyroboros.pdf

~~~
slikts
That looks nice, but Python is an odd choice for studying synchronization
primitives considering it has the GIL.
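
To illustrate why the primitives still matter even under the GIL (a minimal hypothetical sketch, not from the glossary): the GIL serializes bytecode execution, but `counter += 1` compiles to several bytecodes, so a thread switch between them can lose an update; a lock makes the read-modify-write atomic.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n: int) -> None:
    """Increment the shared counter n times under a lock."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1  # read-modify-write, atomic only under the lock

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: the lock preserves every increment
```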

~~~
Eli_P
Thank you, especially for mentioning the GIL; I should have added it there
too.

------
shmerl
Nice resource! It explains the difference between parallelism and concurrency
quite clearly.

Some things can be improved though:

 _> The difference between green and lightweight threads is that green threads
aren't necessarily lightweight; an example of green threads is classic Java
Virtual Machine threads, while Go goroutines are an example of lightweight
threads._

Saying that the difference between green and lightweight threads is that
green threads aren't necessarily lightweight isn't helpful. An explanation
of what each is and when they differ would be more useful.

~~~
slikts
Thanks, and yeah, the part about lightweight threads stands to be expanded.
Other missing parts are reactive vs interactive, linearizable vs
serializable, and push vs pull.

~~~
eptcyka
The main difference between a green and a non-green thread is that a green
thread is scheduled in userspace. Also, JVM threads haven't been green
since the early 2000s: there's a 1:1 mapping between Java threads and
actual kernel threads, and as far as I'm aware, those Java threads are
never migrated between different kernel threads.

~~~
slikts
You're right; I qualified it with "classic JVM threads", but a less ancient
example would be better. It's just not immediately clear which green thread
implementations are also relatively heavyweight, like JVM green threads
used to be.

~~~
eptcyka
I believe the distinction between heavyweight and green threads here is not
really the correct comparison. It's more of a comparison between kernel
managed (non-green) threads and userspace managed (green) threads. One could
argue that Goroutines are green threads that are executed on non-green
threads. It's also not infeasible to imagine a system where kernel threads are
actually lightweight - single address-space unikernels come to mind.
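
The userspace-scheduling point can be sketched with a toy example (hypothetical, in Python since the thread mentions it): generators act as cooperative "green threads" multiplexed by a round-robin scheduler, where each `yield` is a voluntary context switch.

```python
from collections import deque

def scheduler(tasks):
    """A toy userspace scheduler: round-robin over generators,
    each `yield` acting as a voluntary context switch."""
    queue = deque(tasks)
    trace = []
    while queue:
        task = queue.popleft()
        try:
            trace.append(next(task))  # run the task for one "time slice"
            queue.append(task)        # reschedule it at the back
        except StopIteration:
            pass  # task finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"

print(scheduler([worker("a", 2), worker("b", 2)]))
# ['a:0', 'b:0', 'a:1', 'b:1']
```

The kernel never sees these "threads"; all switching happens in the interpreter, which is the defining property of green threads.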

~~~
slikts
I'll rework that section; thanks for the feedback.

------
omazurov
I found only one occurrence of the word _task_ in the document. The entire
paradigm of task parallelism is omitted.

The style is heavy. I consider myself skilled in the area but struggled to map
wording to my conceptual framework or experience. For a novice, most of the
text is probably totally impenetrable. As an example:

 _> Locking can have coarse- or fine-grained synchronization granularity,
where coarse-grained granularity trades concurrency for deterministic
execution while fine-grained granularity is more concurrent but less
predictable._

That's a property of the program (or the programmer), not of locking as
such. Fine-grained locking is of course as reliable and predictable as
coarse-grained, but when an operation involves multiple updates that must
appear atomic (a transactional guarantee, for example), coarse-grained
locking is straightforward while fine-grained locking is harder to get
right. There is no way, however, that the programmer should accept possibly
incorrect updates on the basis that the program now runs faster.
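
The transactional point can be sketched like this (a hypothetical Python example): one coarse lock covering both updates makes a transfer appear atomic; a fine-grained alternative with per-account locks would need a consistent lock ordering to avoid deadlock.

```python
import threading

# Two accounts whose total must stay invariant across transfers.
accounts = {"a": 100, "b": 100}
ledger_lock = threading.Lock()  # coarse: one lock for the whole ledger

def transfer(src: str, dst: str, amount: int, times: int) -> None:
    for _ in range(times):
        with ledger_lock:
            # Both updates happen under one lock, so no thread can
            # observe the ledger mid-transfer: the "transactional"
            # guarantee coarse-grained locking gives for free.
            accounts[src] -= amount
            accounts[dst] += amount

t1 = threading.Thread(target=transfer, args=("a", "b", 1, 10_000))
t2 = threading.Thread(target=transfer, args=("b", "a", 1, 10_000))
t1.start(); t2.start(); t1.join(); t2.join()

print(sum(accounts.values()))  # 200: the invariant held throughout
```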

 _> Declarative concurrency models reduce the need to make trade-offs between
concurrency and correctness ..._

Now that's a huge statement. How can that trade-off be justified in the first
place (it can be!)? Do we really have a need and accept that trade-off with
fine-grained locking above?

I would not insist my interpretation is right but that's exactly my point.

~~~
slikts
The trade-offs of locks include accidental complexity like deadlocks;
message passing removes that trade-off but is less flexible, and dataflow
further constrains the use of explicit synchronization. It's part of the
larger argument that declarative approaches enable equational reasoning.

Task and data parallelism deserve a mention because they're common terms;
thanks for pointing it out.
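
A minimal sketch of the message-passing alternative (a hypothetical example): threads communicate through a queue instead of sharing mutable state, so no explicit locks appear in user code; the queue synchronizes internally.

```python
import queue
import threading

inbox: "queue.Queue[int | None]" = queue.Queue()
results = []

def consumer() -> None:
    while True:
        msg = inbox.get()   # blocks until a message arrives
        if msg is None:     # sentinel: no more messages
            break
        results.append(msg * 2)

t = threading.Thread(target=consumer)
t.start()
for i in range(5):
    inbox.put(i)
inbox.put(None)  # tell the consumer to shut down
t.join()

print(results)  # [0, 2, 4, 6, 8]
```

There is no lock to forget or to take in the wrong order here, which is the trade-off being removed, at the cost of forcing all communication through the queue.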

------
lalwanivikas
Woah! 112 points and no comments. Has anybody actually read it? Or is everyone
upvoting in hopes that someone knowledgeable will comment about the quality of
the content?

~~~
zzzcpan
It's mostly an unstructured set of unrelated concepts. I have no idea why
people would upvote that.

~~~
pimeys
I know most of these concepts, and the valuable part was filling in the
gaps in what I'd missed. I would also like to read about work-stealing
thread pools in a similar manner.

I liked the explanations.

------
eptcyka
> The internal vs external distinction also maps to other pairs of concepts
> like synchronous vs asynchronous, pull vs push, direct vs inverted, opaque
> vs transparent and local vs remote or distributed.

This sentence is superfluous, as none of the listed _pairs_ are mentioned
anywhere else ever again, besides sync vs async.

~~~
slikts
It's a major recurring theme; for example, anything that refers to implicit or
explicit is part of it. The list you quoted is a placeholder to be elaborated.

------
ioquatix
I personally prefer to call this asynchronous computing, broken into
parallel (executed at the same time) and concurrent (execution is
interleaved) models.

~~~
slikts
Asynchronicity is just how sequential models are extended to support
concurrency. Parallelism and concurrency are orthogonal, so a concurrent model
can be executed in parallel.
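
A small sketch of concurrency without parallelism (a hypothetical Python example): coroutines interleave on a single-threaded event loop, where each `await` is a potential switch point.

```python
import asyncio

async def step(name: str, log: list) -> None:
    # Each await is a point where the single-threaded event loop can
    # switch to another coroutine: concurrency without parallelism.
    for i in range(2):
        log.append(f"{name}:{i}")
        await asyncio.sleep(0)  # yield control back to the loop

async def main() -> list:
    log = []
    # gather() runs both coroutines concurrently on one thread.
    await asyncio.gather(step("x", log), step("y", log))
    return log

print(asyncio.run(main()))  # ['x:0', 'y:0', 'x:1', 'y:1'] on CPython
```

The same concurrent model could be executed in parallel (e.g. across threads or processes) without changing its semantics, which is the orthogonality being claimed.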

~~~
ioquatix
> Asynchronicity is just how sequential models are extended to support
> concurrency.

Asynchronicity just means not existing or occurring at the same time, i.e. not
synchronised or without synchronisation.

How you expand that definition to computer science is a matter of opinion. I
personally take the opinion that an asynchronous operation could be either
concurrent or parallel.

> Parallelism and concurrency are orthogonal, so a concurrent model can be
> executed in parallel.

Completely agree with that.

------
hopranch
Dichotomous pairs! Sounds simple enough.

~~~
slikts
The wording is indeed pretentious; hopefully it will be improved in future
iterations.

------
fleetfox
Beautiful :)

