
Fearless Concurrency: Clojure, Rust, Pony, Erlang and Dart - pplonski86
https://sites.google.com/a/athaydes.com/renato-athaydes/posts/fearlessconcurrencyhowclojurerustponyerlanganddartletyouachievethat
======
tombert
I'm a bit surprised that there didn't appear to be any mention of Clojure's
built-in concurrency support outside of basic immutability. core.async gives a
nice channel-based system, agents give an actor-ish system, and STM/atoms let
you mutate the variable safely, without having to manually work with locks.

This is definitely a good high-level article, just something I was surprised
by, since core.async is what drove me (and several other people I know) to
start using Clojure.

EDIT: Just a note that I know core.async isn't built in, that was a mistake.
It _is_ a first-party library, however.

~~~
stingraycharles
It’s also worth noting that almost nobody uses agents or STM (except for some
highly specific use cases, but I’ve never seen them in years), and core.async
is a library, not a part of Clojure (which is a good thing, because it
promotes choice and keeps the language small).

~~~
jwr
I did use agents in the past, but stopped since core.async became available:
it addresses the use cases for agents in a much more flexible way. Also, I
found that handling errors in agents is difficult.

As for the general use case, atoms and core.async are great tools, and I
haven't needed to use refs (aka the STM) for years.

~~~
tombert
Don't core.async go-blocks have issues if you need to block on IO-heavy
stuff, in that the thread pool will be blocked until the side effects are
done? Agents don't have that problem, at least I don't think they do.

~~~
stingraycharles
Yeah, this is true; typically you want to keep blocking functions outside of go-blocks.
By default, core.async allocates as many threads as cores in your CPU.

In addition, I personally find that core.async deals poorly with flow
control (e.g. slow subscribers slowing down the entire flow for every
subscriber), and it doesn't expose much telemetry; it's difficult to find
out what's going on in the thread pool behind it.

I’ve personally settled on Zach Tellman’s Manifold library, which provides a
much more sane abstraction and is fully compatible with core.async.

For some additional thought and material on these topics, I highly recommend
watching this talk by Zach, “Everything Will Flow”
[https://youtu.be/1bNOO3xxMc0](https://youtu.be/1bNOO3xxMc0)

~~~
jwr
Hmm, interesting — but manifold is clj-only, and I use core.async both on
server side and client-side in ClojureScript.

------
antirez
What's great about the actor model is that you can kinda apply it in languages
that don't have native support for it. Even if it won't be a strict model,
with some discipline from the programmer it can do wonders, and it can be
backed by lock-free queues. What I see as a problem is that, for a thread to
wait on new items, a syscall must be executed, which is costly. But one could
spin for a few cycles first and only then degrade to a syscall.

~~~
heavenlyhash
I found it interesting that the article mentions actors at all, for the same
reason.

It's pretty trivial to construct actors out of some other message passing
system. The interesting design choices in doing so are mostly down to the
semantics in A) message passing, and B) scheduling/triggering/mapping-onto-
green-threads/etc.

And for part A, all the other choices in the first three segments of the
article are still effectively the choices that you've got to contend with:
copying vs immutability (bonus, COW mode, but still) vs ownership semantics.

Or to come at it from the other way: actors are far _more_ featureful than is
appropriate to directly compare to mere message passing semantics choices,
because actors generally have some concept of error handling as a result of
their relationship to scheduling, and that puts them on a whole different
field.

~~~
macintux
That’s a big reason I’m a fan of Erlang: individually, its features are
interesting, but collectively they form an amazing system. Greater than the
sum of its parts.

~~~
dmix
Agreed. It's one thing to have good primitives like Go channels, but
Erlang/Elixir provide an entire system from which to build a concurrent/async
application. Things like error handling, messaging, storage, and structuring
your application well for concurrency are basically built into the standard
approach you take when building Erlang/Elixir apps.

~~~
lugg
What's the tl;dr for how Erlang handles errors?

This is something that is usually glossed over or treated as an afterthought,
when it is kind of a big deal in real-life programming.

~~~
macintux
The language is built upon a few core concepts:

* Processes are extremely lightweight, orders of magnitude smaller than operating system processes or JVM threads.

* Exceptions should generally not be handled; instead, those lightweight processes are allowed to fail, and an external supervisor process will re-launch them if appropriate.

* Assertions about the state of the data are effectively enabled on every line of code, so the processes crash early, before corrupting other parts of the system.

* Data is immutable, which plays a role in making those assertions happen.

You could probably strip one or two of those bullet points and/or add a couple
more, but I think that captures the highlights.

I'll toot my own horn. If you find that interesting, this is my favorite talk
I've given about the above:
[https://youtu.be/E18shi1qIHU](https://youtu.be/E18shi1qIHU)

------
ahaferburg
In case the author sees this: The indentation of the code samples is all over
the place, making it quite hard to read. Even more so for a white-space
sensitive language like Pony. Did you mix tabs and spaces?

~~~
rubiquity
Pony is not white space sensitive.

~~~
timClicks
Readers are, though.

------
Lorkki
The Rust examples seemed a little bit oddly structured to me; the case of
collecting a value from a single thread would be just as easily solved by
returning from the closure and receiving it through join():
[https://gist.github.com/lorkki/e7386df8fff186b3e473994e9d31b...](https://gist.github.com/lorkki/e7386df8fff186b3e473994e9d31bee0#file-thread-example-1-rs)

The power of channels could be illustrated better by adapting the next example
to show how you can avoid shared state completely:
[https://gist.github.com/lorkki/e7386df8fff186b3e473994e9d31b...](https://gist.github.com/lorkki/e7386df8fff186b3e473994e9d31bee0#file-thread-example-2-rs)

It's also worth noting that a more versatile channel implementation than mpsc
can be found in the crossbeam crate, in case you're thinking of using them for
anything more serious.

------
nickpsecurity
And Eiffel Scoop which is always forgotten:

[https://en.wikipedia.org/wiki/SCOOP_(software)](https://en.wikipedia.org/wiki/SCOOP_\(software\))

[https://www.eiffel.org/doc-file/solutions/eth-46802-01.pdf](https://www.eiffel.org/doc-file/solutions/eth-46802-01.pdf)

It was ported to Java in a student project. That means other languages without
great concurrency could probably use it with macros or a preprocessor.

------
smallstepforman
I was really excited to discover Pony a couple of years ago; sadly, the
project has negative momentum. So much potential, yet the world isn't ready
for it.

~~~
tempodox
Or Pony is not ready for the world. I mean, they don't even have arithmetic
operator precedence [1]. If you're doing things concurrently it must be
something uber-important, like censoring cat pictures, but apparently not
arithmetic.

[1]
[https://github.com/aksh98/Pony_Documentation#precedence](https://github.com/aksh98/Pony_Documentation#precedence)

~~~
coldtea
> _Or Pony is not ready for the world. I mean, they don't even have
> arithmetic operator precedences_

Many languages don't have arithmetic operator precedence (Smalltalk, Lisp,
etc) and that's fine.

It's not some feature that's missing, it's a design choice.

~~~
tempodox
In Lisp and Smalltalk it's obvious enough how composite expressions are
evaluated, and they don't have the usual infix syntax to begin with. Pony does
and the different precedences are just a bad surprise. As far as that's a
design decision, I'd call it bad design.

~~~
coldtea
> _In Lisp and Smalltalk it's obvious enough how composite expressions are
> evaluated, and they don't have the usual infix syntax to begin with._

That's Lisp. In Smalltalk they do (have the usual infix syntax).

~~~
kazinator
> _have the usual infix syntax_

Isn't it the case that all operators in Smalltalk have the same precedence and
left-to-right associativity? That's not "the usual" infix syntax.

~~~
coldtea
Usual referring to the mere "infix syntax" part (as opposed to some other kind
of syntax that isn't infix).

Not to having "infix syntax plus the associativity / precedence you get in C".
We've already established that it doesn't have the usual precedence.

The distinction was with e.g. Lisp which has a prefix syntax (polish
notation), and other more exotic styles.

------
daotoad
It's too bad Perl6's interesting approaches to concurrency aren't mentioned
here. Supporting concurrency and parallelism were key points in the design of
Perl6, and in the multi-paradigmatic way of Perl, it provides a variety of
tools.

Jonathan Worthington can write and speak about this topic far better than I
can so I refer you to his talk and slides:

1\. [http://www.jnthn.net/papers/2018-conc-par-8-ways.pdf](http://www.jnthn.net/papers/2018-conc-par-8-ways.pdf)

2\. [https://www.youtube.com/watch?v=l2fSbOPeSQs](https://www.youtube.com/watch?v=l2fSbOPeSQs)

~~~
rurban
But these explicitly don't mention the two best approaches he chose to ignore
and even kill.

First the parrot threading model, which provided lockless safe threadpools,
and second the pony threading model which provides the same on top of
supporting shared refs, and forbids blocking IO. Which makes it even faster.

His talk ends with his own simple approach being presented as the best. Which
is not only wrong but also a lie, because he had a part in killing off the
parrot threading model.

~~~
snapdangle
Reini, get off your crazy horse man! No one in the world, including the author
of the Parrot threading model, thinks it is the "best" threading model.

------
li4ick
For those of you who want to test multiple concurrency models, I can highly
recommend Seven Concurrency Models in Seven Weeks from Pragmatic.

------
duneroadrunner
I'll just mention that if you're using C++, the SaferCPlusPlus library[1]
supports a data race safe subset of C++, vaguely analogous to Rust's.

[1] shameless plug:
[https://github.com/duneroadrunner/SaferCPlusPlus#multithread...](https://github.com/duneroadrunner/SaferCPlusPlus#multithreading)

------
tirumaraiselvan
Sorry, I do not know any of these languages but I have used Haskell's STM
which is incredible. Any of the languages mentioned here close to the STM idea
(basically using strong types to ensure mutations happen in isolated context)?

~~~
nickik
The idea of STM has nothing to do with strong types. You can have STM with
types or without.

Clojure has had a full STM since before version 1.0. A simple example (note
that refs must be altered inside a dosync transaction):

(def stm (ref {}))

(dosync
  (alter stm assoc :testkey "testvalue"))

(println @stm)

See: [https://clojure.org/reference/refs](https://clojure.org/reference/refs)

~~~
londt8
An important part of STM is that the retry mechanism requires functions to be
pure, which the Haskell compiler will check for you.

~~~
yogthos
Technically, STM requires that you're not updating your state via side
effects. That's a different proposition from requiring your functions to be
pure. For example, you could have a function with a print statement inside
your STM transaction just fine.

~~~
jerf
If you want to go way down that rabbit hole, it's an active topic of
conversation in the Haskell community right now how to break up IO into more
granular pieces than what the IO type natively provides, where a function is
either "pure" or it's a dirty rotten effects producer, and no middle ground in
between.

Although, even a stray print inside an STM transaction could actually do you a
lot of damage, since "print" is actually a fairly expensive operation. I've
written all sorts of programs in my life that were technically bottlenecked
not on reading the input, writing the output, or any of the processing in-
between, but on the printing it was doing. And, relatedly, every community
that tries to write a really blazingly fast web server in their language runs
into a bit of a wall around the mere act of _logging_ the hits. (Even just the
date computation starts to bottleneck things, but the writing does too.)

~~~
yogthos
That's the difference between the Clojure and Haskell mindsets in a nutshell.
The Clojure approach is to have sane defaults and guide the programmer towards
doing the right thing, while ultimately letting them do what they need to.
Whether it makes sense to do something or not is context dependent in
practice. Ultimately, the person writing the code understands their situation
best, and the language shouldn't get in the way of them doing what they need
to.

You could of course argue that by preventing the user from doing certain
things you avoid some classes of errors. However, I will in turn argue that by
forcing the user to write code for the benefit of the type checker often
results in convoluted solutions that are hard to understand and maintain. So,
you just end up trading one set of problems for another.

~~~
leshow
> You could of course argue that by preventing the user from doing certain
> things you avoid some classes of errors. However, I will in turn argue that
> by forcing the user to write code for the benefit of the type checker often
> results in convoluted solutions that are hard to understand and maintain.
> So, you just end up trading one set of problems for another.

Funny, this is exactly the opposite of my take from a type system like
Haskell's. The thing is, whether it's enforced by types or not, the same
invariants exist in your code. The only difference is that in one case they
are checked explicitly at compile time, and the other case they are hidden and
can blow up your programs.

If anything, explicit invariants make code _easier_ to maintain and
understand.

~~~
yogthos
Except that they're not the same. Static typing restricts you to a set of
statements that can be verified by the type checker. This is a subset of all
valid statements, otherwise you could just type check code in any language at
compile time.

Static typing makes many dynamic patterns either difficult or impossible to
use. For example, Ring middleware becomes pretty much impossible in a static
language: [https://github.com/ring-clojure/ring/wiki/Middleware-Patterns](https://github.com/ring-clojure/ring/wiki/Middleware-Patterns)

The pattern here is that the request and response are expressed as maps. The
request map is passed through a set of middleware functions, and each one can
modify this map in some way.

A dynamic language makes it possible to write middleware libraries that know
absolutely nothing about each other, and compose seamlessly. A static language
would require you to provide a full description of every possible permutation
of the request map, and every library would have to conform to it. This
creates coupling because any time a library needs to create a new key that
only it cares about, the global spec needs to be modified to support it.

My experience is that immutability plays a far bigger role than types in
addressing the problem of maintainability. Immutability as the default makes
it natural to structure applications using independent components. This
indirectly helps with the problem of tracking types in large applications as
well. You don't need to track types across your entire application, and you're
able to do local reasoning within the scope of each component. Meanwhile, you
make bigger components by composing smaller ones together, and you only need
to know the types at the level of composition which is the public API for the
components.

------
mhd
Speaking of concurrency, is there a current environment that does something
similar to Linda? I always thought that sounded quite promising, but
apparently that generally is a death sentence (I also liked Modula-3, the Palm
Pre and Tcl).

~~~
timClicks
I think that creating a Linda engine is a chicken/egg problem. If there are no
applications that are using the paradigm, then there's no need for a highly
performing engine.

------
rainygold
You can also use Actors and elements of the OTP from Erlang in Clojure via
certain libraries.

------
lelf
> _Its use of dynamic typing makes me a little bit hesitant to use it, though,
> as I really love the help provided by static typing._

There is Dialyzer, a static analysis tool (for types and more) for Erlang. It
is part of Erlang/OTP, i.e. built in.

[http://erlang.org/doc/apps/dialyzer/users_guide.html](http://erlang.org/doc/apps/dialyzer/users_guide.html)

~~~
NickM
Yeah as a long-time Erlang dev I would strongly recommend Dialyzer to anyone
trying to build a serious Erlang project. It doesn't give you quite the same
level of rigor that a strong typing system would, but it is nice to be able to
spec out types on an as-needed basis while letting it infer types in other
parts of the code. It kind of ends up feeling like a nice middle ground
between static and dynamic typing, in my experience.

------
eeZah7Ux
"The problem is, that sometimes, unfortunately, these tools are just not
sufficient, it's still easy to shoot your own foot and get lost in a sea of
complexity."

As a non-native English speaker I found this use of commas very difficult to
understand. Often I get the feeling that native speakers don't even notice it.

~~~
seaish
The first comma is simply incorrect. "unfortunately" does need commas around
it, but it could be moved to the beginning for a simpler sentence.

The fourth comma is also incorrect and should be replaced by a semicolon.

"Unfortunately, the problem is that sometimes these tools are just not
sufficient; it's still easy to shoot your own foot and get lost in a sea of
complexity."

And looking at that now, I would also get rid of "the problem is that".
"Unfortunately" and "sometimes" back to back lacks flow, so I would also
replace "sometimes" (in this case with "often" in place of "just"). In the
original, "just not" has a stronger grouping than "not sufficient", which is
why "insufficient" wasn't used. Now with "just" gone, I'd swap back in
"insufficient".

"Unfortunately, these tools are often insufficient; it's still easy to shoot
your own foot and get lost in a sea of complexity."

~~~
bshimmin
Just to aid people who are interested in looking this sort of thing up, the
fourth comma in the original is an example of a "comma splice", where the
comma is too weak to join the two independent clauses. You can solve a comma
splice by either making the two independent clauses into separate sentences,
joining them with a stronger piece of punctuation (like a semicolon, colon, or
a dash), or by using a conjunction like "and".

~~~
DFHippie
British English seems to be more forgiving about comma splices than American
English. One certainly sees them much more frequently in British English.

~~~
bshimmin
I suspect it may be more the case that many people under 40 in the UK weren't
really taught much at all in the way of grammar, and even people who have
involuntarily picked up fairly decent grammatical skills still struggle with
comma splices, which are, admittedly, quite a subtle sort of error whose
detection is predicated on a full mastery of the purposes of the various
punctuation marks _and_ an understanding of clauses.

------
johnisgood
I am surprised it does not mention Ada. I really feel that Ada is not getting
enough attention; it definitely deserves more. Even if you have heard of the
language before, please do check out some of the resources available; you will
not regret it!

It has been supporting multiprocessor, multi-core, and multithreaded
architectures for as long as it has been around. It has language constructs to
make it really easy to develop, say, embedded parallel and real-time programs.
It is such a breeze. I admit I am not quite sure what they are referring to by
fearless, but if it means that they can handle concurrent programming safely
and efficiently in a language, well, then Ada definitely has it.

Ada is successful in the domain of mission-critical software, which involves
air traffic control systems, avionics, medical devices, railways, rockets,
satellites, and so on.

Ada is one of the few programming languages to provide high-level operations
to control and manipulate the registers and interrupts of input and output
devices.

Ada has concurrency types, and its type system integrates concurrency
(threading, parallelism) neatly: protected types for data-based
synchronization, and task types for concurrency. These can of course be
unified through the use of interface inheritance, and so on.

If you are interested in building such programs, I recommend two books:

[https://www.amazon.com/Building-Parallel-Embedded-Real-Time-...](https://www.amazon.com/Building-Parallel-Embedded-Real-Time-Applications/dp/0521197163)

[https://www.amazon.com/Concurrent-Real-Time-Programming-Alan...](https://www.amazon.com/Concurrent-Real-Time-Programming-Alan-Burns/dp/0521866979)

...other good resources:

[https://en.wikibooks.org/wiki/Ada_Style_Guide/Concurrency](https://en.wikibooks.org/wiki/Ada_Style_Guide/Concurrency)

[https://www.adacore.com/uploads/books/pdf/AdaCore-Tech-Cyber...](https://www.adacore.com/uploads/books/pdf/AdaCore-Tech-Cyber-Security-web.pdf)

The last PDF will summarize in what ways Ada is awesome for:

\- contract-based programming (with static analysis tools (formal
verification, etc.))

\- object-oriented programming

\- concurrent programming

\- systems programming

\- real-time programming

\- developing high-integrity systems

and a lot more. It also gives you a proper introduction to the language's
features.

~~~
weberc2
We learned some Ada in school. I really liked it, but the free toolchain was
poor (hard to get working properly, unintuitive) and the community stubbornly
defended its various idiosyncrasies like its verbose Pascal syntax and its
homegrown project file format. Most importantly, it just didn't have much of
an open source ecosystem, and the community was pretty hostile and defensive
toward newbies. But yeah, the language was neat! :)

~~~
johnisgood
When did this happen? Have you checked its current state? It has been growing
ever since. There are dozens of tools available today for free, and it is very
easy to set up.

I agree that its open source ecosystem needs to grow, but for that we do need
more Ada programmers! :P

By the way, I am really sorry if you experienced hostility from the community.
May I ask where it took place? I had similar experiences with a variety of
communities, even Rust. I try not to be demotivated from the language itself;
after all, it is not really the language's fault, and there are people like
that everywhere. They need to learn that what is obvious to them is not
necessarily obvious to other people, and that asking is a sign of wanting to
learn, which I believe is a good thing. :)

~~~
weberc2
Hmm, circa 2012. I've checked in on it a handful of times in the ensuing
years, but I came across Go in 2014 and it ended up suiting my needs almost
perfectly (rapid application development, simple, great performance, mostly
safe, fantastic tooling/ecosystem, zero-runtime-dependencies, etc).

As for where the hostility took place, it was mostly Ada proponents who would
pop up in /r/programming, here on HN, etc. I'm sure the circumstances select
for the most toxic folks from any community, but it seemed especially potent
from Ada folks (could have been bad luck, ymmv and all that).

Would love for Ada to modernize and improve tooling/ecosystem, but between Go
and Rust, I'm afraid that the advantages for a modern Ada might be marginal.
It seems unfortunate for Ada that it didn't modernize prior to 2012; it could
have eaten both Go and Rust's lunch before they even existed.

------
platz
> it seems quite a lot easier to manage concurrency in Clojure from my biased
> point-of-view.

Assuming you know that none of the code in your concurrent blocks is
side-effecting, which may be hard to know if it's not your code. Haskell
checks this for you, so to me it seems easier in Haskell, not harder, because
I don't have to hunt down and read all the source code to know whether what
I'm doing has concurrency bugs or not.

------
kccqzy
I'd also like to mention cooperative concurrency. I know the author kind of
skipped over this from the first sentence of the article where he defined a
concurrent program as having more than one thread of execution. But with
coroutines, we can have a single-threaded application that handles multiple
tasks concurrently too. In fact I believe this is much easier to reason about
for the programmer: the kernel won't randomly decide this thread's time slice
is up in the middle of an important data operation; the programmer knows
exactly when an operation may cause the current task to be "descheduled"
because those are usually explicitly tagged with the "await" keyword. In my
experience, this model eliminates the vast majority (but not all) of the use
cases for locks and other synchronization primitives. It is also very
performant.

Take a look at Python's asyncio, which has been in the standard library for
years, and you'll see how easy it is to write concurrent networking code
without threads and with very very few uses of locks.

~~~
lugg
Did you just say single threaded concurrently reasoned applications are more
performant than multithreaded applications?

Trying to reason about how this could be possible. I'm still amazed by the
performance you can get out of async non blocking style application code so
don't take this the wrong way, truly trying to understand here.

I'm also a little confused about threads being rescheduled by the OS being
treated like an inconvenience, to me it's a feature that prevents one
application being too hoggish.

I can see your point that it's nice to have these labelled with await, but
I'm struggling to pinpoint a time I've asked myself what thread scheduling
might mean for my code. Other than just assuming that another thread could be
anywhere, doing anything, which I think is the correct approach, no?

Locks btw, totally non issue with the right API/Lang support.

~~~
kccqzy
No I didn't say "single threaded concurrently reasoned applications are more
performant than multithreaded applications" because I was highlighting this
style of concurrency from the programmer's perspective. But yes, for certain
types of applications it can be more performant.

Naturally, if your application is computationally intensive, a single thread
can't compete with a multithreaded application. But for applications that use
a lot of slow, blocking I/O, converting them to a single-threaded application
that uses non-blocking I/O is a significant reduction in overhead. Compare a
traditional web server that uses one thread per request and blocking I/O,
versus one that uses an event loop and non-blocking I/O. You will see why the
latter is more efficient with system resources. Again this isn't a panacea,
and for some applications you do have to use threads. I'm just pointing out an
omission of the article.

~~~
lugg
Thanks for the clarification I see your angle now.

I'm not trying to debate on this, pretty sure we're just reasoning about facts
we both agree on here.

I was aiming for a more general stance, but if we're talking about web
servers, there is one thing we might disagree on: the net benefit of
non-blocking in that specific scenario.

Let me see if I can explain: the individual requests may contain a lot of
slow blocking IO, but these are handled asynchronously with a thread per
request at the server level. So the blocking is limited to the request only
(ignoring thread pool limits).

While non-blocking may produce more performant request-level handling in some
cases, most (or at least a substantial amount) of web request logic depends
on chaining those blocking IO calls one after the other, e.g. get thing,
modify, return.

With blocking you execute, and reason about, the code sequentially; with
non-blocking you reason with callbacks or syntactic sugar around promises,
specifically because sequential programming is easier to reason about.

The point of difference is that with non-blocking you have a lot of overhead
in the event loop and in the implementation, all to produce sequentially
executing code that doesn't block other requests.

This is sort of why me and a few colleagues came to the conclusion PHP is
still pretty damn decent for web stuff.

Non-blocking is great for concurrency where it actually results in
parallelism, but if it doesn't, there isn't much gain, only added complexity.

All this needs to be taken with a grain of salt. Node has Async await for a
reason, and I've seen a few different implementations of non blocking php as
well.

The choice really comes down to which style you find your self benefiting from
on the regular.

Bleh that needs about three rounds of editing before it makes sense, something
I don't have time for. Sorry for the ramble!

------
punnerud
What about picking jobs from an SQL table and writing back with a check? I
never had any of the problems specified, on a multi-core/process system. Or
have I just been lucky, or blessed by ignorance?

~~~
pornel
That's an example of shared-nothing message-passing system, which is a
perfectly valid solution. It's implementable in all the aforementioned
languages.

------
fogetti
I am not sure, but I think Go channels were inspired by CSP (communicating
sequential processes), which is not inherently unsafe, though.

~~~
leshow
from the article:

> Notice that this article does not include Go, a language that admittedly has
> an elegant concurrency solution as well (Go channels) because that solution
> is _not actually thread-safe_ \- it's not very hard to have race conditions
> in Go or corrupt state because Go does not enforce a separation of shareable
> and not-shareable mutable state.

~~~
jerf
My vote for Go's #1 mistake isn't the popular "lacking generics", but missing
the opportunity to have fearless concurrency, and making it so that goroutines
can only communicate via immutable shared values and copied values. If you
want to optionally and explicitly penetrate that barrier sometimes, I'd be OK
with that, but this should be the default.

Generics will probably be added after the fact 10 years after the 1.0 release.
They may not be quite as slick as something that was in there from the
beginning, but based on the many other languages that added them after the
fact, it'll probably work out well enough. Fearless concurrency can't be added
to a language; if it's not in there from the beginning it'll never be added.
It's a change so big it's almost automatically a new language. And, as can be
logically deduced from that statement, I am aware that it would have some
additional effects in the language, such as introducing immutability, it
wouldn't be simply what we now know as "Go" with just a minor tweak. I'm
comfortable with that. Even with Go's focus on simplicity, I do think there
was a slight error towards being a bit simpler than the problem permits here.

(I do pretty well with Go's concurrency, but I was first trained brutally by
Erlang and Haskell. I see those who don't come by the same route have more
trouble.)

------
aimatt
Man the indentation in this article is confusing me.

------
crimsonalucard
Technically, Node achieves it as well.

------
falcolas
I have a, perhaps unjustified, concern about the loss of fear around things
like concurrency. People keep substituting appeasing the compiler for actually
thinking about _hard_ problems. But until the halting problem is solved at the
compiler level (at a level that can account for runtime cases), you should
always have a healthy amount of fear when writing concurrent code.

Fear, and respect, the absurdly difficult challenge that is writing correct
concurrent code, even when your compiler is helping you out.

~~~
kibwen
I find this line of thinking to be a holdover from an earlier age. One could
replace the word "concurrency" in the above comment with "memory safety" to
express the popular sentiment as of the 1980s--but in the decades hence the
vast, vast majority of programmers have come to be able to completely ignore
concerns related to the careful management of allocating and deallocating
memory, and though we can argue that the result is a proliferation of memory-
hungry Electron-like apps, on balance it's been a dramatic victory for letting
people focus on solving the problem at hand rather than distract them with
tiresome pointer-juggling.

It's true that in the 90s it was important to cultivate a healthy fear of
concurrency in the same way that parents in Cambodia must sadly teach their
children to fear landmines. However, there's nothing inherent in the problem
space that dictates that the concerns of one's ancestors must be the concerns
of their descendants. One day we hope the landmines in Cambodia will be
cleared, just as we hope the landmines in concurrent programming will be, and
I'll be thankful when that day comes.

~~~
falcolas
How do you solve for deadlocks at the compiler level? Even if all your memory
access is perfectly safe, you can still deadlock on external resources if you
aren't paying attention.

That's what I mean by fear and respect for concurrent programming. That's the
problem that hasn't been solved.

~~~
zzzcpan
Deadlocks are a solved problem. Technically, they can't even exist in any
concurrency model that doesn't share anything. What can exist is processes
waiting for messages from each other, but that's not a deadlock; it's valid
behavior, and only potentially problematic without timeouts. Asynchronous
message passing with event-driven/reactive semantics further enforces the
impossibility of blocking while waiting for a specific message. In practice,
strict event-driven semantics are not even necessary for this to never become
a problem.

~~~
gpderetta
Deadlocks are not restricted to shared memory communication. Two Unix
processes talking via a socket pair can trivially deadlock ( for example if
they are both blocked waiting for the other side to speak first).

Asynchronous systems can deadlock as well; it is just much harder to debug,
as the debugger won't show an obvious system thread blocked on some system
call. The deadlocked threads of execution still exist, but will have been
subject to CPS (continuation-passing style) and hidden from view: just some
callback waiting forever on some wait queue.

~~~
zzzcpan
It's not useful to use the same term for very different kinds of things.
Shared-resource deadlocks are common and disastrous problems. Share-nothing
mutual blocking is uncommon, not necessarily a problem at all, and can be
completely harmless and automatically recovered from when it is a problem. For
example, spawning actors that wait without timeouts would be absolutely OK;
the parent can handle all the timeouts and kill the children.

Two processes blocking on a socket is not a deadlock. Surely there are
timeouts on both sides, because using sockets without timeouts is just
ignorance, and both will simply time out and move on.

Also, strictly statically declared event handlers per actor are 100% free of
mutual blocking and deadlocks, because they can't wait for messages in a way
that blocks other event handlers.

~~~
gpderetta
The term deadlock has been used for message passing issues since the dawn of
time. It is literally the same issue.

Using timeouts to paper over issues is just wrong. I accept that timeouts are
necessary to deal with network issues (and a timeout should cause the
connection to be dropped, so it won't solve the deadlock issue), but they are
certainly not required for in-application message passing.

Finally, if an actor won't send a message until it has received another one, I
fail to see how statically declared handlers will help.

~~~
zzzcpan
> Finally if an actor won't send a message untill it has received another one,
> I fail to see how statically declared handlers will help.

Think of it as reacting to messages, not waiting. In that model actors of
course can react by sending messages, but can't have a special waiting state
for specific messages, making it impossible to block other handlers. I'm not
sure why this is hard to understand.

We do have this problem solved in every possible way. But it's such a
non-issue with the actor model that there is no point sacrificing any
flexibility for it.

~~~
gpderetta
Forget about waiting. Think about state machines. Let's say that there is a
rule that, if the machine is in state S1, on reception of message M, it sends
message M and moves to state S2. This is the only rule for state S1. Now if
two actors implementing this state machine and exchanging messages with each
other find themselves in state S1 at the same time, they are stuck. This is a
bug in the state machine specification of course, and I would call it a
deadlock. What would you call it? How would the actor model statically prevent
you from implementing such a state transition rule?

Edit: BTW, not sure why you got downvoted.

~~~
zzzcpan
This is why I'm talking about specific model with static handlers per actor,
where you can't choose handlers dynamically depending on the state you are in.
Whether you are on state S1 or S2, all handlers are still able to receive
messages, what they can't do is run at the same time.

~~~
gpderetta
It can receive all the messages you want, but if the only message that would
cause a state transition (and send out a message) is M, then it is still
stuck.

I mean, I'm no expert, but I guess you could statically analyze the state
machine and figure out, given a set of communicating actors, which sequences
of messages would lead to a global state from which there is no progress. I
assume that, because message ordering is not deterministic, the analysis is
probably not easy to do.

~~~
zzzcpan
Well, this is a limit all models have. You can abuse memory safety the same
way and use indices into bounds-checked arrays as raw pointers, for example.

------
pizlonator
Whether or not this is "fearless" depends on your point of view.

As a long-time pthreads programmer, these systems feel like skittish
concurrency. It's for folks who are too afraid to use the underlying
concurrency primitives: shared memory, synchronization of some kind provided
by the OS, and threads.

~~~
felixgallo
Exactly. Because the recorded history shows those constructs are nearly
impossible to use correctly.

~~~
Animats
Yes. Having spent too much time debugging code by people who thought they were
clever enough not to need language safety, I want language-level protection.

(Did the Python crowd ever fix the race condition in CPickle?[1] They were in
denial about this about eight years ago when I reported it. Doing multiple
CPickle operations in separate threads can crash CPython. If you search for
"CPickle thread crash" you find many reports of hard-to-reproduce problems in
that area.)

[1] [https://bugs.python.org/issue23655](https://bugs.python.org/issue23655)

~~~
int_19h
I don't see a denial in that bug report - they're just saying that they don't
have an actionable repro, and don't know how to get one.

