The example under "Communicating by sharing memory" isn't correct, despite the author claiming that "it works". It's a very common example in concurrency 101 (updating a value). The fact that the author claims that it's correct is pretty concerning to me.
Adding a print(len(ints)) at the bottom of the function:
$ go run test.go
5
$ go run test.go
8
More on-topic, channels have their own tradeoffs. I often reach for WaitGroups and mutexes instead of channels, because things can get complicated fast when you're routing data around with channels...more complicated than sharing memory. I don't think it's good advice to broadly recommend one over the other--understand their tradeoffs and use the right tool for the job at hand.
> More on-topic, channels have their own tradeoffs. I often reach for WaitGroups and mutexes instead of channels, because things can get complicated fast when you're routing data around with channels...more complicated than sharing memory. I don't think it's good advice to broadly recommend one over the other--understand their tradeoffs and use the right tool for the job at hand.
Unfortunately, some Go people reach for the "this is not idiomatic"-cudgel far too quickly, instead of actually looking at the various trade-offs.
This share-by-communicating mantra needs to die. Channel-based code in Go has a tendency to require nontrivial cleanup. Each time you need to compose a channel you end up introducing another goroutine (for example, just to map a type or merge multiple channels). Now that goroutine needs to be closed, then you end up with an additional close channel, and sometimes you need to drain channels to prevent deadlocks.
They are not universally a bad thing, but total avoidance of sync package primitives is a bad idea (and much slower in many circumstances)
For simple cases mutexes and such can be much faster and simpler. Channel based constructs are good when they model the problem well as streaming or queuing.
> Now that goroutine needs to be closed, then you end up with an additional close channel
Goroutines that just map values or similar will usually share a context with some other goroutine and can therefore reuse that context's close channel.
Yeah, but you often still have to write the code that reacts to the context. The way context gets talked about in the Go community, it gives many programmers the impression that a context closing will somehow forcibly shut down a goroutine or something. There's a lot of "when the context is cancelled, it will...", but it really ought to be phrased more like "when the context is cancelled, you can...", something to the effect of "have code that catches that close and handles it properly".
So, having a context is still "end[ing] up with an additional close channel" as silasdavis said. Even if you pass it to something that will be cancelled like a network operation, you still must correctly notice and handle the resulting timeout error.
The value of context is almost entirely that everybody in the community has come to agree on it, rather than its functionality per se. Which is the same as io.Reader, in that the utility isn't the functionality, which any number of languages can replicate, but the way everybody agrees to implement and use it through the entire ecosystem, which is where other languages have a lot more trouble. Nothing technically stops that from happening, but you often end up with islands of agreement in different major frameworks or library ecosystems instead of ecosystem-wide agreement. Context is everywhere now; it's actually been a few months since I last encountered a place that I wish took a context but didn't.
Hi, author of the post here. Which example doesn't work? I just pasted the communicate by sharing memory example into the go playground: https://play.golang.org/p/bWtyGTC-EsC and it gives the same length every time. Am I missing something or are you referring to a different example?
> More on-topic, channels have their own tradeoffs. I often reach for WaitGroups and mutexes instead of channels, because things can get complicated fast when you're routing data around with channels
You're absolutely right. I certainly didn't intend to give a blanket recommendation. It's more of a, "If you're sharing memory, might it become clearer if you share memory by communicating?" I was worried that the simplistic examples would not properly represent the cases I was thinking of. I think that's a communication error on my part.
In a way this supports the argument for the "Do not communicate by sharing memory; instead, share memory by communicating" mantra, as in a language with no compile-time checks for incorrect use of shared memory, it is very easy to get it wrong.
I have. I didn't have time to rewrite it (I'm working, and I didn't want to take it down) so I added a few caveats. I am going to research this more and follow up again. Thanks again to everyone for the great responses. I learned a lot from the comments here.
Welcome! It's the same for other data structures, by the way (not just slices) — maps are not safe for concurrent writes either. (The rationale seems to be that users of the data types can choose whether to make them safe for concurrent use or not depending on the use case.)
> Map is like a Go map[interface{}]interface{} but is safe for concurrent use by multiple goroutines without additional locking or coordination. Loads, stores, and deletes run in amortized constant time.
> The rationale seems to be that users of the data types can choose whether to make them safe for concurrent use or not depending on the use case.
Also that for most uses a concurrent map is way overkill, and a thread-safe one is both costly and basically useless (hence the Java folks not keeping the thread-safety when migrating from Hashtable to HashMap).
On the other hand they're kinda shit given how awful non-builtin data structures are in Go, and how easy it is to "leak" maps between goroutines.
Results are cached, but the main reason the example “works” is because the playground has GOMAXPROCS set to 1, meaning only a single goroutine will be running at any given point.
Ran the example on my machine and can confirm it's broken.
Be careful relying on the results of the Go playground; it has a bunch of differences and probably has GOMAXPROCS set to 1, which most other systems will not.
You need a sync.Mutex or similar protecting your call to append :)
The Go playground is designed to be deterministic, right? Not sure that’s a useful test. On mobile so I can’t compile right now or else I’d take a look.
As an aside, I recommend run.Group [1] as a replacement for WaitGroup. It implements a very common pattern where you have N goroutines that should execute as one unit (if one fails, everyone else should abort) and allows you to collect the final error. Similar to ErrGroup, but better.
I wish WaitGroups weren't so awkward syntax wise... "Here is an easy way to keep track of a bunch of Async tasks you launch as a group! Just don't forget to increment this counter each time you launch one or it won't work!"
What are the performance characteristics of channels? I do some work with computer graphics, and a lot of the concurrency concerns involve operating on large chunks of memory (textures, vertex buffers etc.), and my mental model is that channels involve a lot of copying, so you would pay a heavy cost for sending these heavy objects back and forth. But I have no idea if my mental model is correct.
You'd pass pointers to the buffers over the channels, so there wouldn't be much copying.
You would need to be careful that once one goroutine has sent a pointer to a channel, it doesn't touch that buffer again (until the receiver has finished with it, at least), otherwise you can get data races. You're implementing ownership semantics here, and unlike in some languages i could name, you're doing it without help from the type system.
No, GP was talking about a clear race condition on growing the slice in the first example. It is not just a case of "idiomatic Go" vs "non-idiomatic Go", nor, as the author suggested, a problem that only appears when the code grows. It is a critical bug in the first example.
Edit: added what one gets when run with `go run -race`:
> WARNING: DATA RACE
> Read at 0x00c0000a6000 by goroutine 8:
> runtime.growslice()