

If concurrency is easy, you're either absurdly smart or you're kidding yourself - gdp
http://plsadventures.blogspot.com/2009/08/if-concurrency-is-easy-youre-either.html

======
ori_b
Funnily enough, Plan 9 uses concurrency (via CSP) as a way of structuring
programs to make them easier to write and reason about.

Concurrency isn't hard; Sharing is hard. This is the reason that mutable state
hurts understandability - things can change under you because the view of
state is shared. This is the reason global variables are bad - they are shared
between different parts of the program, and can be modified from places you
can't see. And this is the reason shared-memory concurrency is hard. Add to
shared-memory concurrency a dash of race conditions in operations involving a
load and a store, and things get hard.
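To make that load-and-store race concrete, here's a minimal sketch (Python, purely illustrative; the names are made up) that plays the two halves of `x += 1` by hand:

```python
# Two "threads" each increment a shared counter, but the increment is
# really a load followed by a store. Interleave the steps badly and one
# update is silently lost.

shared = {"x": 0}

def increment_steps(state):
    """Yield after each half of x += 1: a load, then a store."""
    tmp = state["x"]          # load
    yield
    state["x"] = tmp + 1      # store
    yield

t1 = increment_steps(shared)
t2 = increment_steps(shared)

# A perfectly legal interleaving: both threads load before either stores.
next(t1); next(t2)   # both threads load x == 0
next(t1); next(t2)   # both store 1 -- one increment is lost

print(shared["x"])   # 1, not the 2 you'd expect
```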

Of course, I'm oversimplifying as I write this, but I think that's really the
gist of it.

~~~
cduan
This is why I generally prefer event-driven programming over multithread
programming. This is the sort of programming you do, for example, when you are
implementing a GUI: you write short event handler functions that respond to
events (such as mouse clicks or the completion of network or disk reads), and
then a master loop receives events and dispatches the handler functions.

You get the same benefits as concurrency (maximum processor usage, at least
when you only have one processor), but you avoid the pitfalls of shared
memory, race conditions, and unexpected state changes (because only one event
handler runs at a time). It does take a bit of getting used to (it seems so
easy to make calls like time() or mkdir() that are actually blocking and thus
technically should be split off into a new callback function). But I find
that the benefits of processor efficiency plus the comfort of avoiding
concurrency problems are worthwhile.

<http://en.wikipedia.org/wiki/Event-driven_programming>
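A toy version of that master loop (a Python sketch; the event names and handlers are invented for illustration):

```python
# Handlers run one at a time off a single queue, so no two handlers ever
# race on shared state -- the property cduan is describing.
from collections import deque

events = deque()
log = []

def on_click(data):
    log.append(f"click at {data}")

def on_read_done(data):
    log.append(f"read {data} bytes")
    events.append(("click", (0, 0)))   # handlers may enqueue further events

handlers = {"click": on_click, "read_done": on_read_done}

# Master loop: pull one event, dispatch its handler, repeat.
events.append(("read_done", 512))
while events:
    kind, data = events.popleft()
    handlers[kind](data)

print(log)  # ['read 512 bytes', 'click at (0, 0)']
```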

~~~
ori_b
The point that I was trying to make is that a well designed concurrent program
is often easier to reason about than an event driven program. Even if it
didn't have the benefits of scaling to multiple processors and hiding latency,
concurrent programming would have advantages in clearly structuring programs.

For a great discussion of how concurrent programming produces clear and
understandable program structure, see Rob Pike's Newsqueak tech talk
(<http://video.google.com/videoplay?docid=810232012617965344>)
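For a taste of the style, here's a rough CSP-ish sketch in Python (queues standing in for channels; the stage names are mine): each stage owns its own state and stages communicate only by passing values, so there is nothing shared to corrupt.

```python
# A two-stage pipeline: produce -> square -> collect, connected by queues.
# Each queue has one writer and one reader, so the output is deterministic.
import threading, queue

DONE = object()  # sentinel marking end-of-stream

def produce(out):
    for i in range(5):
        out.put(i)
    out.put(DONE)

def square(inp, out):
    while (v := inp.get()) is not DONE:
        out.put(v * v)
    out.put(DONE)

nums, squares, results = queue.Queue(), queue.Queue(), []
threading.Thread(target=produce, args=(nums,)).start()
threading.Thread(target=square, args=(nums, squares)).start()

# The main thread is just the final stage of the pipeline.
while (v := squares.get()) is not DONE:
    results.append(v)

print(results)  # [0, 1, 4, 9, 16]
```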

------
dkarl
It takes him all of five sentences to back off from his sensationalistic
headline and admit there is another possibility:

 _or the way they are using concurrency is restricted to nice simple patterns
of interaction to which we can apply some simple mental analogy_

Actually this possibility covers most programming requirements. I like to
think I'm superhumanly smart, and I would love to tackle a hard concurrency
problem, but so far I have only needed mundane engineering ingenuity to map my
problems to known solutions.

~~~
gdp
I see. The post was essentially a reaction against people going "concurrency
is easy! What are you talking about it being hard?!"

When people say "concurrency is easy", they mean "I use little bits of
concurrency that I understand well and I find those easy". That's not the same
as "concurrency is easy". That kind of arrogance is essentially like saying
"flying a jet fighter is easy, because I know how to use a seatbelt".

------
brg
In an ideal world, concurrency need not be a problem. After all, we can say
that it is simply a matter of checking the safety of every read/write against
every possible state of every thread that may be running concurrently.

However, the reality is that developers cannot do this checking with complete
accuracy. The simplistic reason is that the complexity of concurrent programs
is exponentially greater than that of non-concurrent programs. In a concurrent
model, if I have m threads of execution and n read/writes per thread, I need
to verify n^m states in my code. Such a blowup in state makes any reasonable
person look for generalizations and optimizations, but those then lead to
incorrect assumptions and, ultimately, defects.
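The exact count depends on what you measure, but a quick back-of-the-envelope check (Python) shows the blowup either way; counting raw interleavings of m threads with n operations each gives (n*m)!/(n!)^m, which explodes even faster than n^m:

```python
# Number of distinct interleavings of m threads, each performing n
# operations in a fixed order: the multinomial (n*m)! / (n!)^m.
from math import factorial

def interleavings(m, n):
    return factorial(n * m) // factorial(n) ** m

for m, n in [(2, 2), (2, 5), (3, 5), (4, 5)]:
    print(f"{m} threads x {n} ops: {interleavings(m, n)}")
# 2 threads x 2 ops: 6
# 2 threads x 5 ops: 252
# 3 threads x 5 ops: 756756
# 4 threads x 5 ops: 11732745024
```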

Outside of this simplistic reasoning lie other problems which further
complicate verification during design and construction: the realities of
deadlines, limited access to the source or documentation of components, and
the learning curve of technologies used as black boxes.

------
noss
This is a whole lot of text to describe determinism and non-determinism.

~~~
gdp
Using "determinism" with respect to concurrency means about a billion
different things depending on who you talk to.

For example, is the process that results from parallel composition of two
deterministic processes deterministic or non-deterministic?

The answer is "it depends".

Concurrent "deterministic" processes are not significantly easier to reason
about (informally) than non-deterministic ones, whichever definition you
choose.

In fact, re-reading, I definitely don't see anything in there that could be
described by either "determinism" or "non-determinism", so either your
definition differs wildly from mine, or you're choosing to be dismissive
deliberately :)

~~~
fhars
Unix pipes (deterministic concurrency) are far easier to reason about than a
multithreaded shared-state program (nondeterministic concurrency). But then
you can of course argue that this is due to the fact that the shell with
proper unix filters forms a monad with cat as the unit and the pipe operation
| as bind.
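That determinism is easy to see in miniature. Here's a Python sketch (the filter names are made up to echo their unix namesakes) where filters are just functions over streams, so the output is fixed entirely by the input, with no interleaving to reason about:

```python
# Unix-style filters as functions over line streams: cat is the
# pass-through "unit", pipe threads each stage's output into the next.

def cat(lines):              # pass the stream through unchanged
    yield from lines

def grep(pat):               # keep only lines containing pat
    return lambda lines: (l for l in lines if pat in l)

def wc_l(lines):             # count the lines, like wc -l
    yield str(sum(1 for _ in lines))

def pipe(source, *filters):  # compose: source | f1 | f2 | ...
    for f in filters:
        source = f(source)
    return list(source)

out = pipe(["foo", "bar", "foobar"], cat, grep("foo"), wc_l)
print(out)  # ['2']
```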

------
swannodette
Learn Clojure. Learn Haskell. Learn Erlang. Learn Mozart Oz. And you'll be
surprised that you don't need to be a rocket scientist to write robust,
highly concurrent code.

~~~
gdp
_sigh_

From the article:

 _Now, before you jump up and down and tell me that "insert-your-favourite-
concurrency-model-here addresses these problems", I know there are strategies
for managing the complexity of these tasks. I did concurrency theory for a
living, so I know some approaches are better than others. My point here was
essentially to illustrate why it is important to manage complexity when doing
concurrency. It's definitely not just a case of "you can't do concurrency so
that means you are stupid". I'm arguing that pretty much everyone is bad at
concurrency beyond some simple patterns of interaction that we cling to
because we can maintain a good mental model by using them._

To elaborate, things like the technologies you mentioned may provide better
mental models for concurrent development, but I'm not convinced that they have
solved concurrency in its entirety.

~~~
swannodette
I noted that from the article. But nowhere did you mention how these languages
fail to allow for complex concurrent modeling - which they do allow. You
simply discussed your viewpoint without context for how these languages do
not competently allow for massively concurrent programs - that's what they're
designed to do.

And I'm not saying they solve "concurrency in its entirety". I'm not even
clear what you're trying to express with that statement.

~~~
gdp
Let me clarify: I'm a big fan of most of the languages you've described. I
think they do a really good job in the problem domains they are designed for.

My only point is that the domains they are designed for are constrained. I
don't mean this as a negative - I consider a general pthreads-style
implementation of concurrency to be constrained by the ability of people to
reason about that kind of concurrency, probably to a much larger degree than a
considerably more elegant implementation in a modern functional programming
language.

I would argue, however, that such languages simply make it easier to construct
concurrent programs that follow certain patterns. They provide useful
abstractions for managing the complexity of concurrent interactions, which I
think was my original point, and which I think is brilliant! What is missing
is something that provides good-quality abstractions, high-quality static
analysis and very strong safety guarantees _in the general case_.

Think about things like mobile processes, "ubiquitous computing" and software
with huge failure tolerances by virtue of being able to reconfigure and
coalesce resources at will.

And hopefully the flow-on effect of that would be to essentially stop horrible
concurrency implementations from being used in languages like Java and
friends.

~~~
swannodette
"I think they do a really good job in the problem domains they are designed
for."

The problem domain of concurrency?

"What is missing is something that provides good-quality abstractions, high-
quality static analysis and very strong safety guarantees in the general
case."

They have excellent abstractions and great safety guarantees. Not sure what
you mean by static analysis. A concurrent program is non-deterministic; what
is going to go under static analysis? I'm sorry if I'm misunderstanding
something here.

"software with huge failure tolerances by virtue of being able to reconfigure
and coalesce resources at will."

Wasn't this the whole point of Erlang? You know like telecom switching?

~~~
gdp
A very similar point was made in the blog comments, but I'll restate my
response here in brief:

Concurrency is hard even when you're just using a pencil and a piece of paper.
If you write down a description of a system using any kind of simple notation,
it will be difficult for me or anyone else to read that and have a good
intuition about the runtime behaviour of that system. This isn't a discussion
of statefulness or mutability. It's just about the presence of a very large
number of interactions that essentially defy any usual sense of causality.

------
kevingadd
"It's our total inability to mentally reason about concurrency that makes it
difficult"... hyperbole much? Ugh. Less ranting and more solutions, please? :)

~~~
gdp
Duly noted, thanks!

