(1) "Contrary to the common belief that message passing is less error-prone, more blocking bugs in our studied Go applications are caused by wrong message passing than by wrong shared memory protection."
(2) "Shared memory synchronization operations are used more often than message passing, ..."
So basically, (1) people have more trouble writing correct multithreaded routines using message passing, and (2) in large production applications people tend to fall back to using shared-memory primitives rather than message-passing anyway.
It seems the intention of the Go language designers was to make a language that would be simple and easy for programmers to pick up. Given that most programmers are already accustomed to multithreaded programming using shared memory and that this is conceptually simpler, I think the language designers made a mistake by throwing channels, a relatively new and experimental approach (yes, I know CSP is from 1978, but I'm talking about widely-adopted industrial languages), into the language. I think it was the single biggest mistake in the design of Go.
The paper makes that claim for blocking bugs, but it also says “Message passing causes less non-blocking bugs than shared memory synchronization”
Also, having more blocking bugs in message passing could be explained by users using message passing only for the harder cases. I don’t see that discussed in the paper, but “Message passing […] was even used to fix bugs that are caused by wrong shared memory synchronization.” might hint at it.
Finally, programmers being less familiar with message passing might explain the difference in the number of blocking bugs, rather than message passing code being inherently more difficult to write.
Channels are definitely hard to get right in Go consistently, and they can be a bit non-intuitive. They result in deadlocks/stalls and leaked goroutines. What they don't result in, however, is memory corruption -- none of those cases winds up reading or writing the wrong values. So in that sense I think you can say that concurrency failures with channels tend to be "safer", in that they won't cause operations to proceed with the wrong data. Which is one advantage over many cases of shared-memory concurrency bugs.
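A minimal sketch of that failure mode (the result type and lookup function are made up for illustration): a goroutine parked forever on a send to an unbuffered channel after the receiver gave up. The bug is a leak, not corruption:

    package main

    import "time"

    // result and lookup are made up just for this illustration.
    type result struct{ value int }

    func lookup() *result {
        time.Sleep(time.Second) // slow enough that the caller gives up
        return &result{value: 42}
    }

    func main() {
        ch := make(chan *result) // unbuffered: a send blocks until a receive

        go func() {
            ch <- lookup() // the receiver is gone by now, so this blocks forever
        }()

        select {
        case r := <-ch:
            _ = r
        case <-time.After(10 * time.Millisecond):
            // Timed out: the goroutine above is leaked, parked on its send.
            // The failure mode is a stuck goroutine -- not corrupted data.
        }
    }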
I've been programming for 20 years, as a career, and I've never once used shared memory in the sense I picture when I think "shared memory". Maybe some languages I've used do things internally with shared memory, I don't know.
My point here is to be aware that the people you work with and know aren't necessarily representative of "most programmers" even though it feels that way to each of us. We expose ourselves to what we know more than what we don't know, and that influences how we each see the world.
Definitely a good point and something we all should keep in mind.
With that being said:
I would say no. Outside of Silicon Valley, most programmers are employed working on internal enterprise applications using backend languages like Java, C#, Ruby, and Python. Shared memory is the standard way multithreaded programming is done in all these languages. It is also the standard way multithreaded programming is done in C and C++, and therefore in most of the serious software out there. I think this definitely covers "most programmers," or at least most application developers, the subset of programmers most likely to try to learn Go.
Avoiding sharing memory for concurrent workers is rare, in my experience, unless the workers are completely isolated -- which they rarely are. You can move some things outside (for example, Redis as a cache), at the expense of performance or simplicity.
I think channels are often reached for too early for everyday synchronization, when concurrency likely isn't even needed in the first place; and when it is, shared memory will probably work well enough until things become more complex.
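To illustrate, a minimal shared-memory sketch (the counter type here is made up): a mutex around a map covers most everyday synchronization without any channels:

    package main

    import (
        "fmt"
        "sync"
    )

    // counter is the boring shared-memory version: a mutex around a map.
    // For everyday cases like this, a lock is usually simpler than a
    // goroutine owning the map and serving requests over channels.
    type counter struct {
        mu sync.Mutex
        n  map[string]int
    }

    func (c *counter) inc(key string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.n[key]++
    }

    func (c *counter) get(key string) int {
        c.mu.Lock()
        defer c.mu.Unlock()
        return c.n[key]
    }

    func main() {
        c := &counter{n: make(map[string]int)}
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                c.inc("hits")
            }()
        }
        wg.Wait()
        fmt.Println(c.get("hits")) // 100
    }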
My point wasn't about using timers that present channels to the user, but rather that implementing timers/tickers is largely done using channels (though also with runtime help to make them faster/more efficient), and it's those types of problems for which channels are a good solution, IMO.
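A rough sketch of what I mean (the tick function is made up; the real time.Ticker gets runtime help and is much cheaper): a ticker-shaped API falls out naturally from a goroutine plus a channel:

    package main

    import (
        "fmt"
        "time"
    )

    // tick is a toy ticker built from a goroutine and a channel. It delivers
    // the current time on the returned channel every d until stop is closed.
    // This sketch only shows the channel-shaped interface.
    func tick(d time.Duration, stop <-chan struct{}) <-chan time.Time {
        out := make(chan time.Time, 1)
        go func() {
            defer close(out)
            for {
                time.Sleep(d)
                select {
                case <-stop: // shut down promptly once stop is closed
                    return
                default:
                }
                select {
                case out <- time.Now():
                case <-stop:
                    return
                }
            }
        }()
        return out
    }

    func main() {
        stop := make(chan struct{})
        ticks := tick(100*time.Millisecond, stop)
        for i := 0; i < 3; i++ {
            fmt.Println(<-ticks)
        }
        close(stop)
    }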
There's a lot of features of C++ that would have been awesome, had it not been for those corner cases.
In many cases in Go they left footguns and open corner cases for the end coder in favor of an easier compiler implementation.
The point is that shared-nothing designs are just not how Go typically works. You can see that in functions like http.HandleFunc() (and everything else that uses the DefaultServeMux), which registers a global handler across all threads of the program.
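To make that concrete, a minimal sketch (only the handler itself is invented): http.HandleFunc registers on the package-global http.DefaultServeMux, and passing nil to ListenAndServe means "use that same global mux":

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // http.HandleFunc registers the handler on http.DefaultServeMux,
        // a package-level global shared by every goroutine in the process.
        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello")
        })

        // A nil handler tells ListenAndServe to use that same global
        // DefaultServeMux -- shared state by default, not shared-nothing.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }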
I mean, Go doesn't force you to use shared memory; no language with threads does. It encourages it through library design (e.g. the default serve mux) and language design (e.g. package init() functions) though.
Does that lead to more bugs in programs? I don't know. It's quite possible it doesn't matter in terms of defect count. It's not shared nothing, though, is all I'm saying.
I suspect the fact that asynchronous messaging turns out to be particularly well-suited to network communications was a happy accident.
Anyway, I guess my point is that local-only actors can be useful, but I definitely agree that network transparency is a huge win.
There are (at least) a couple of different ways to identify the recipient of a message: process ID (unique identifier for another actor) or Erlang's name service.
The VM does indeed know whether the recipient is local; the sender typically neither knows nor cares, although the information is available if useful.
Apparently more easily, with a faster time to market, and with fewer errors than alternative languages. E.g.:
But the idea is not that you as the end programmer "make optimal use of a computer's memory hierarchy".
It's that you write in a way that makes it easy for the runtime to ensure that what you do is solid, correct, and scalable -- and it's up to the runtime makers to make sure it makes optimal use of the computer's memory hierarchy.
And it's also that you sacrifice some of that "optimal use", to get the "solid and correct and scalable" part.
Other actor systems may vary.
And there are many other, slower, concurrency-safe languages with message passing only that don't have Go's problems. Go at least has tools to detect deadlocks at run time during testing, but these tools are no replacement for proper static type checks for concurrency bugs.
Go's message-passing model is somewhat error-prone: it lets you pass references, so you can create shared data through channels. It's easy to do this accidentally by passing slices, which are references to their underlying arrays, across a channel.
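A minimal sketch of that accident (kept single-goroutine for brevity; put the send and the mutation on different goroutines and it's a genuine data race):

    package main

    import "fmt"

    func main() {
        ch := make(chan []int, 1)

        buf := []int{1, 2, 3}
        ch <- buf   // sends the slice header; the backing array is shared
        buf[0] = 99 // the sender keeps mutating the same array

        got := <-ch
        fmt.Println(got[0]) // prints 99: the receiver sees the sender's write
    }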
More generally, Go channels are one-way. Since you often want an answer back, you have to create some two-way mechanism from the raw one-way channels. It seems to be common to do this at the raw channel level, rather than using some "object"-like encapsulation. Creating a mechanism which shuts down cleanly is somewhat tricky, too.
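A minimal sketch of the usual workaround (the names request/serve are illustrative, not a standard API): embed a reply channel in the request, and close the request channel for shutdown:

    package main

    import "fmt"

    // request carries its own reply channel, turning two one-way channels
    // into a call/response pair. Names are made up for illustration.
    type request struct {
        arg   int
        reply chan int
    }

    // serve handles requests until the requests channel is closed; ranging
    // over the channel gives a clean shutdown path.
    func serve(requests <-chan request) {
        for req := range requests {
            req.reply <- req.arg * 2
        }
    }

    func main() {
        requests := make(chan request)
        go serve(requests)

        reply := make(chan int, 1)
        requests <- request{arg: 21, reply: reply}
        fmt.Println(<-reply) // 42

        close(requests) // lets serve's range loop exit cleanly
    }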
On the other hand, statistical methods don't give quality its due.
I'd rather fight a blocking bug than a data-updated-via-race-condition bug.
In node, var x = 5; is atomic.
In golang, var x int = 5 is not atomic.
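A minimal sketch of the difference (the counts are illustrative; run it with go run -race to see the report on the plain variable):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var plain int64          // plain variable: concurrent writes race
        var counted atomic.Int64 // atomic: concurrent updates are well-defined

        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                plain++        // data race: `go run -race` reports it
                counted.Add(1) // safe
            }()
        }
        wg.Wait()
        fmt.Println(plain, counted.Load()) // plain may be < 1000; counted is 1000
    }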
"Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously."
Do you enjoy trolling and dismissing Go in general? Reading your comments, it sounds like envy toward Go's popularity. When you're not complaining about its concurrency features, it's about Go's GC.
It's Rob Pike's view that concurrency enables parallelism "for free". I think that's oversimplified and at odds with the way people who write CPU-bound code actually use parallelism features, but that's how Go was designed: it omits traditional parallelism features in favor of concurrency features.
EDIT: Glanced at Wikipedia out of curiosity. Forgot it was used in ASCI Red. So, you definitely can use them for parallel computing. To pass a teraflop, you just need 1,600 sq. ft. of them running at 850 kW. ;)
You could mimic parallel processing effects with multitasking tho.
Note however how parent didn't say that Go "can't do" parallelism, but that Go is not "a language designed for parallelism".
It's the granularity of the concurrency.
Most encounters I've had with "message passing"--whether it was AJAX, Win32 PostMessage, or grade school--have been non-blocking. Go channels block.
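A minimal sketch of the distinction: an unbuffered send blocks until a receiver is ready, while a buffered send is closer to fire-and-forget until the buffer fills:

    package main

    import "fmt"

    func main() {
        // Unbuffered: a send blocks until a receiver is ready.
        unbuffered := make(chan int)
        go func() { unbuffered <- 1 }() // sending inline here would deadlock
        fmt.Println(<-unbuffered)

        // Buffered: sends complete immediately until the buffer fills --
        // closer to the fire-and-forget feel of PostMessage or AJAX.
        buffered := make(chan int, 2)
        buffered <- 1
        buffered <- 2
        fmt.Println(<-buffered, <-buffered)
    }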
edit: Here's a pretty good write-up from 2016 on why channels are an anti-pattern:
Go tried CSP in the beginning, but in CSP the 'sequential process' yields only on blocking (I/O) or when it finishes.
Also, message passing is just that: you can't reach out and modify a message after you've posted it, IRL. This means messages are 'safe' to use concurrently, since they are copies. That, in conjunction with strict CSP semantics (processes always run to completion; cooperative, not preemptive, scheduling), gets you 'safe concurrency' at the price of performance and, at scale, of 'reasoning' about what is happening.
So the practical decisions were (a) honors-system message passing via references, (b) preemption at I/O and elsewhere, and (c) the introduction of mutexes, etc., as a fallback to the [more performant] 'lightweight threads and locks' paradigm.
I would say, though, that Go channels have not turned out to be as useful as I thought they would be. I don't find myself using them that much. But maybe it's just my style or the type of stuff I'm writing.
No, async/await is typically stackless, while goroutines have stacks. That's a significant difference.
With something like pipes or sockets, there are timeouts and saner handling of breakages. With Go channels, one false move, and you've got a panic or an infinite wait.
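A minimal sketch of bolting the timeout on yourself (recvWithTimeout is a made-up helper, not a stdlib API): a bare channel receive waits forever, so the deadline comes from select + time.After:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // recvWithTimeout is a made-up helper: a bare channel receive waits
    // forever, so the timeout has to be bolted on with select + time.After.
    func recvWithTimeout(ch <-chan int, d time.Duration) (int, error) {
        select {
        case v := <-ch:
            return v, nil
        case <-time.After(d):
            return 0, errors.New("receive timed out")
        }
    }

    func main() {
        ch := make(chan int) // nobody ever sends
        if _, err := recvWithTimeout(ch, 50*time.Millisecond); err != nil {
            fmt.Println(err) // a timeout error instead of an infinite wait
        }
    }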
I like Go, just not a big fan of channels.