Go concurrency patterns (golang.org)
142 points by joubert on Jan 17, 2013 | 81 comments

How do goroutines get cleaned up? For example, in many of these examples they create a goroutine that is supposed to send a message over a blocking channel eventually. What if I use their "select + time.After()" pattern and the timeout hits and I return. Does the goroutine hang forever? Does it constitute a memory leak? We both hold the channel, so it can't get garbage collected, is Go smart enough to know there aren't any more readers and the goroutine + channel can get cleaned up?

Maybe to solve this I use a "quit" channel as they do. Do I need to worry about what happens if a quit message is never sent, or if two routines send one, or if the receiver is multiplexed and only one of many readers gets the message and cleans up correctly. It sounds like the whole malloc() and free() dance all over again, except this time I don't have any nice invariants to reason about such as "Everything that gets malloc()ed needs to be free()d, exactly once". Instead I need to worry that everyone is playing nice and will terminate when asked, and I need to remember to ask exactly once.

I'm not sure this is exactly what you're asking, but returning from a goroutine ends it. The rest is just garbage collected as needed.

If you choose to select indefinitely, yes, you will need another channel to signal when you should return, or the goroutine will continue to run, and continue to hold a reference to that first channel.

It doesn't even require "selecting indefinitely." Every time you create a goroutine you need to worry about how it terminates, or you get a memory leak. If it uses channels, you need to make damn sure you perform all the proper operations on it, consume all the values from it eventually, etc. It's like manual memory management, except that the invariants are all implicit and poorly specified, like "Take at least 10 values from this channel, and I will terminate" or "Send me a value on this other channel over here and I will terminate". Sounds like a nightmare to reason about in some cases, which is a shame because Go does so much else Just Right^TM

A concrete example: slide 34 [1] contains this code:

    func main() {
        c := boring("Joe")
        for {
            select {
            case s := <-c:
                fmt.Println(s)
            case <-time.After(1 * time.Second):
                fmt.Println("You're too slow.")
                return
            }
        }
    }
This code looks really simple, but it's only correct because the process is known to terminate. You couldn't, for example, use the body of this function as a subroutine somewhere: every time you executed it, a boring() goroutine would be left hanging, which is a classic memory leak. What's worse, "c" looks like an ordinary variable I don't need to worry about (yay garbage collection!), but it's not, because it escapes to a running goroutine.

Correct me if I'm wrong but even if boring() was a much simpler function that returned after generating one value, if the timeout happened first it would be left hanging. In that case the only thing that would make the program correct is that of two racing goroutines, one is expected to return first. That's the kind of thing that keeps me up at night, especially when Go makes it so easy to treat goroutines and channels like every other GC-managed data structure when in fact they are every bit as leaky and difficult as threads.

[1]: http://talks.golang.org/2012/concurrency.slide#34

> Correct me if I'm wrong but even if boring() was a much simpler function that returned after generating one value, if the timeout happened first it would be left hanging.

That is exactly the case with time.After(), which sidesteps the problem by returning a buffered channel: the timer's send always succeeds even if nobody is listening. That would not work for an infinite generator, of course.

This is true. You can reason about a lot of particular cases and design a correct program based on these behavioral invariants. The trouble is they tend to end up being these pervasive, whole-program invariants. For example, if you buffer exactly 1 value like this, then you need to make sure the channel only escapes as a read-only channel.

The power and orthogonality of Go's concurrency primitives is actually encouraging. I'm not picking holes in it because I think Go is a bad language. On the contrary, I just think that it's not the endgame of concurrent programming. Using channels is not a safe operation in the same way that passing around memory references in a GC-managed language is safe, or passing values in a type-safe language is safe. You can write a correct, concurrent program with pretty much any behavior you want in Go, but it's still a minefield of deadlocks and memory leaks. I think there's room for improvement there.

I'm learning Go to evaluate whether I want to try to use it at work. After using Erlang, Python's gevent, and Haskell, each quite substantially for concurrency (though to varying degrees, Erlang the most), I'm finding Go's concurrency constructs to be by far the most dangerous of the four. It's not even close. Go is the only one with synchronization on the message sending, and I'm trying to keep an open mind on the utility of that, but so far it mostly seems to have the effect of turning valid code in any other system into a deadlock in Go, for the purpose of avoiding putative problems in asynchronous sending that just don't seem to happen in practice in the other environments.

(By open mind, I mean that I acknowledge that I have years of being steeped in an asynchronous message-passing environment, and that I may need to unlearn some things and learn others. I'm a polyglot programmer, so this isn't my first time for that, either. There are some ways in which Go's channel style is easier to work with than Erlang's. Still, right now I'm cautiously negative on the net utility of that, even so.)

I've deadlocked Erlang twice in many years. It didn't make it to production either time. I deadlocked Go twice in the first two days, and several times since. (And Go terminates the entire process upon deadlock.) And what really concerns me in terms of recommending it for work is that while I instantly understood how I deadlocked Go, it was only because of the fact that I'm fairly experienced now in reasoning with these sorts of advanced concurrency constructs. Many of my coworkers would have been stuck for hours.

It's not out of the running yet, considering many of its positive qualities and the competition it is up against, but the Unambiguous Win I was hoping for is not emerging. (Many of these issues may be avoidable with style; for instance, one can write goroutines that manage resources in a fairly Erlang-y style and which are services that have a continuous receive-reply loop, and those are safe enough. But I'd really rather have more compiler support here rather than depending on convention; that's how we got to the realization that conventional threads are a bad idea in the first place.)

> I'm finding Go's concurrency constructs to be by far the most dangerous of the four.

That is exactly what I can't get over. I don't understand why this is the default. Most processes in the real world are not synchronous. Sending an asynchronous message makes more sense as a primitive (synchronous can just be modeled on top as a mini protocol, sender sends his own address). The other way is not as clear. (Spawn a separate concurrency primitive and send to the channel from there?).

The other thing that bothers me is the focus on channels rather than on processes. The two can be seen as equivalent but for some reason process-centric design seems more natural to me.

There is of course Scala with Akka, but I don't see the JVM as a benefit; I see it as a nuisance (it is more of an irrational political thing, probably, come to think of it).

Though I'm sure you will, I would say give it some more time.

My start with go was just like yours, but it quickly started to "click", and more robust patterns began emerging more naturally for me. I don't think I can verbalize yet what has made the difference for me, but it may be something like "structure your code around channels" vs "use channels to synchronize your code".

Yes, I'm not done. I am mostly just a bit disappointed that it still remains relatively easy to get into trouble with concurrency. It takes much more work to get in trouble with those other environments. It is a positive that it does scream bloody murder, rather than silently corrupting things or the other ancient failures of shared-state threading.

+1 for this!

When I first started with golang I too was flailing around with deadlocks, but once you have spent a bit of time with it, it becomes obvious when you have messed up.

I think that this is more of a lack of familiarity with the language rather than an intrinsic problem with golang.

This is extremely common and is like second nature to any Go programmer. This is true of any sort of concurrency: if you have a for-loop in a Java thread, you're going to have some sort of terminating condition or way of signaling death to the thread.

There are many ways to deal with this, here is a simple one:

  ch := make(chan int)
  defer close(ch) // I can time out later if I want; the close signals the consumer
  go consumer(ch)
  // I time out down here and return

  func consumer(input chan int) { // I will terminate when the input channel is closed
  	for x := range input {
  		fmt.Printf("I got %d!\n", x)
  	}
  }

Very valid concerns I think, but it's not quite that bad. A few points:

- Goroutines are not garbage collected.

- In this case and many others, you can just use a buffered channel. But of course in many other cases it doesn't solve the problem.

- If you solve this with a "quit" channel but don't use it, obviously you have a problem

- Several goroutines can try to send to the same "quit" channel. Make it buffered and have every goroutine try to send.

- In the case of several listeners, you can just close the channel and all receivers will return.

- However if you have several senders and several receivers on the "quit" channel, then you have a problem: closing a closed channel will panic.


Rob Pike has a Q&A at the end of his video where he answers (what sounds like) the exact same question you ask: http://youtu.be/f6kdp27TYZs?t=46m17s

Essentially, it gets garbage collected without you having to worry about it unless you're doing something fairly special.

Except that Rob Pike later expanded on exactly that question in the golang-nuts group: https://groups.google.com/d/topic/golang-nuts/bfuwmhbZHYw/di...

Quote: "Goroutines are not garbage-collected. They must return from the top-level function (or panic) to have all their resources recovered."

Nice find! Although, he doesn't really answer the question, which had to be asked more than once in that video. I'd love to see a talk going in-depth on this topic alone.

It took me a while to get what you were asking. Given (Syntax may be wonky):

    func b(x chan bool) {
        time.Sleep(60 * time.Second)
        x <- true
    }

    func a() {
        x := make(chan bool)
        t := time.After(30 * time.Second) // this was the "timeoutChannelCruft"
        go b(x)
        select {
        case <-x:
            return
        case <-t:
            return
        }
    }
What happens to goroutine b and channel x? Does goroutine b block for the remainder of the program? Does channel x ever get cleaned up?

I'd guess b() blocks forever so in your timeout case you could probably just do

    go func() { <-x }()
to make sure that b() eventually returns and is cleaned up.

Right, but be careful, if you execute that in the <-x case you just created a totally different memory leak since there won't be any more values coming down that channel. The only reason the timeout channel itself is safe is because the channel is buffered. Which incidentally is a good way to handle this particular example: make a buffer of size 1, and then be sure to never ever send more than 1 value.

This is kind of my point: with Go's concurrency, all the contracts are implicit and dangerous, exactly the kind of thing that are supposed to be solved by a good type system and garbage collection (incidentally, Go gets all of that stuff right).

Video of Rob giving the talk: www.youtube.com/watch?v=f6kdp27TYZs

Also, this makes use of the excellent "Golang Present Tool", which makes most of the code on the slides executable.[1]

[1]: http://godoc.org/code.google.com/p/go.talks/present

Completely off topic, but this is literally the worst presentation software I've ever seen, and I hate it when it pops up. It didn't fit in my screen, so I zoomed out (or whatever the pinch movement does), at which point horizontal scrolling stopped working, even when I zoomed back in. IIRC it didn't work at all on the iPad.

Is it really so hard to have forward/back buttons on the slide somewhere?

Google presenters don't care about you unless you're running Chrome and have a keyboard.

Pro-tip: If you load it with JS disabled, all the slides show up as plain text. That might let you at least consume the slides' content.

EDIT: Pro-tip #2: It looks like if you tap the edge of the next/previous slide, it should scroll onto screen. Kind of tricky, though.

Here's a video of the talk from Google I/O 2012: http://www.youtube.com/watch?v=f6kdp27TYZs

> Rough analogy: writing to a file by name (process, Erlang) vs. writing to a file descriptor (channel, Go).

That analogy doesn't really work, messages can be sent to an Erlang process by pid, how this pid came to be obtained isn't necessarily through naming (you can also register a process to a name, in which case it is indeed like writing to a file by name), and some of the things messages are sent to are not actual processes (e.g. ports).

The way I see it is that you write to files with an fd, but an fd can point to more than just files.

> some of the things messages are sent to are not actual processes (e.g. ports).

Ah, fair enough. I don't know Erlang. Can a process have multiple distinct channels it receives on?

Nope, each process has a single mailbox.

On the other hand, processes don't have to go through their mailbox sequentially, they can prioritize messages (through pattern matching) and handle these first even if they are the last message in the mailbox (it's called a "selective receive"), so you get the same feature trivially by tagging messages instead of sending a message to a different channel.

Slide #33: Fan-in using select

I always get alternating Joe/Ann responses. I'm not seeing that in the code though. It says selectors are chosen pseudo-randomly. I'm expecting Joe or Ann to get a couple quickies in at some point.

I've seen the RNG get seeded in later examples. This may just be an unseeded RNG and an unfortunate sequence that gives the illusion of order.

If you added a trivial wait of random length before each response, mightn't you see non-alternating behavior at times?

Not GP, but I also saw this. Increasing the timeout to a random 0–10 seconds does give the expected output.

But shouldn't I see this from time-to-time with 1 second as well? Hmm..

My rand.Intn() was always returning the same value. Seems like you have to seed it manually (once, in main() for example): rand.Seed(time.Now().UTC().UnixNano())


Edit: It's in the documentation, but you have to look for it.

"Seed uses the provided seed value to initialize the generator to a deterministic state. If Seed is not called, the generator behaves as if seeded by Seed(1)"

Ok, I've never actually written anything in go so humour me here.

The generally given reasons on HN that threads mapped 1:1 to pthreads are bad seem to be as follows:

- Shared mutable state is hard.

- Memory usage is inefficient when you are allocating a fixed-size stack per thread and those threads spend most of their time blocked waiting for IO.

Let's assume you are using some thread-pooling pattern, so thread startup time is not such an issue.

Apart from perhaps better syntax with channels etc, how does go fundamentally solve these problems in way that you cannot with standard threads?

For example, shared mutable state problems can be mitigated to a degree by going down a "shared nothing" approach and handling shared state through some middleman like a Queue or a SQL/NoSQL DB.

I assume go supports some form of pass by reference so you can still make shared mutable state an issue with it if you pass a reference to a goroutine.

Each goroutine has its own stack regardless of how many pthreads exist, so there can still be wasted memory on a blocking operation.

Nothing stops you from doing anything you want with threads, including using them to implement very safe patterns. The problems are: (1) the thread libraries do not afford those safe patterns, and indeed afford very unsafe ones (and also ones that compose poorly); (2) libraries written for the ecosystem will end up using the poor patterns; (3) the lack of compiler enforcement, and how easy it is to accidentally mutate something unexpectedly, mean that unless you are superhumanly careful you will still screw something up, somewhere.

And you are correct that Go does not enforce shared-nothing between the goroutines. It has better affordances on those fronts than conventional C, but it is not enforced as it is in Erlang or Haskell. And while I say the affordances are "better", I still think that people screwing it up will be a practical problem.

Yeah, this is exactly right... I have to say Go changed my thinking about concurrency. But now I want to write a very small wrapper around pthreads that lets you write in the actor style. It just adds those "affordances".

Go is more or less the actors style, except with the (discouraged) possibility of sharing mutable state... even though for some reason it doesn't seem to be advertised as such.

The reason is that I don't think Go can cover what C + Python can. C gives you more low level control and Python is still shorter (and thus quicker). I like Go a lot but I would rather program in C + Python (like I do now) than C + Python + Go.

And then the other component to this is de-emphasizing the somewhat-horrible-for-concurrency Python/C API. The library I'm talking about would have channels, and you would have one end open in Python, and one end open in C. Python and C are running in different threads. Rather than the crazy subroutine/callback hell you have now with any nontrivial Python/C binding.

So basically I want to fix the C/Python interface, which is the only reason it is awkward to program in C + Python (the languages themselves are both great), rather than adding another language that overlaps highly with both of them.

The OS is written in C, so you're never going to get past C. If there were a whole world written in Go, that might be reasonable... but I don't believe in portability layers.

What does the OS being written in C matter? libc has to make a syscall to access OS functions just like anything else does. There's no obligation on a language implementation to make calls through C. It's a thin enough layer that if your language runtime is otherwise written in C you may as well, but that's a design decision.

It would be perfectly reasonable to write a non-C language runtime targeting Linux against syscall instructions or against Windows' documented system DLL interfaces.

Even if it didn't there's no reason to ever add a C dependency to your system if one of the languages you're already using has sufficient "systems" versatility. Go is clearly intended to fill this role.

tl;dr: you can get past C just fine even on an OS written in C and the C dependency isn't free.

I find this pretty interesting. It is basically using Erlang as an OS on top of Xen. http://erlangonxen.org/

Haskell had a similar project but I can't remember what it was called.

Well, when you make a syscall, there's C on the other side. You can make a syscall without C, but it's not portable to different architectures. I think that is essentially the reason that every language runtime I know of uses C -- it's the only choice to portably access operating system services. If there is a counterexample that would be interesting to know, but I don't think it exists.

Probably Forth could be thought of an alternative non-C stack, and of course it's used on machines without an OS. But that's how far afield you have to go to get away from C.

My point is that when you're using Go your stack is still Go and C, to an extent. Usually you won't need to peek under the hood, but at some point when developing nontrivial services, you always need to. Basically my philosophy is that you should always understand at least 1 abstraction level below what you program in. With either Go or Python, that's C.

Channels pretty much are queues. You can do all of this in something like Java with thread pools and BlockingQueues (or Jetlang or Akka), but it's going to get really messy really quick. With Go you can wait on multiple channels until any one of them has an item. That's not possible in Java so you end up with one thread per queue even if you don't need them; plus all the synchronization headaches that come with that. There's probably some Java library that lets you do what I'm talking about, but it's not going to be as nice as doing it in a language that has these concepts built right in. Plus, only recently (in Java 7, I think) will the runtime actually use epoll (and I don't think there's kqueue support at all, so you're out of luck on OS X), so no matter what you do you're going to have one thread per network connection.

I am not sure how to read your question. Go with goroutines does nothing that you can't do in C with pthreads, because you could probably write something that would take Go and output C.

What Go does is make some things easier: goroutines are multiplexed onto OS threads in a way that means you don't need to worry about pooling or scheduling, and channels provide an easy way of sending and receiving data in either a blocking or non-blocking manner.

In Go, things are generally passed by value by default; if you want to pass by reference, you just pass a pointer.

Go solves a lot of general concurrency problems with a first-class, language-based approach rather than having you implement the finer details for each task that requires it.

You can solve the same problems by manually using threads, message queues, etc. but it would be much easier if the tools and language already had generalized solutions to put into practice.

How do goroutines compare to things like futures in, e.g., Java?

The ultimate structure seems much the same: start off a worker, and then you wait for the result with a timeout. As an example, the following code gives the same result:

In Go:

    c := make(chan Result)
    go func() { c <- Web(query) }()
    go func() { c <- Image(query) }()
    go func() { c <- Video(query) }()

    for i := 0; i < 3; i++ {
        result := <-c
        results = append(results, result)
    }
In Java:

  ExecutorService executorService = Executors.newCachedThreadPool();
  Future[] futures = new Future[]{
      executorService.submit(new Web(query)),
      executorService.submit(new Image(query)),
      executorService.submit(new Video(query)),
  };
  for (int i = 0; i < 3; i++) {
      Result result = futures[i].get();
      results.add(result);
  }

Goroutines are extremely lightweight. It is common to run hundreds of thousands of them on one machine. (The main reason is that goroutines start off with a very small stack and extend it as necessary.)

It is up to executorService to decide how many threads to use to execute those tasks. But they will be executed in threads. So you are unlikely to have more than a few hundred (maybe a few thousand?) of them around at any time.

AFAIK you would need a CompletionService[1] to get the results in a completion-order-agnostic order.

[1]: http://docs.oracle.com/javase/7/docs/api/java/util/concurren...

Find an example that shows using a timeout channel along with another async activity on another channel, and then switching on that. Then you will start to see some elegance versus the Java way. (Sorry, in a hurry, otherwise I'd provide the link myself.)

Please do show how to do this in Go when you have a few minutes, I'm really curious (and happy to write it in Java once I see what you mean).

I think they mean something like:

    c := make(chan bool)
    go doSomeWork(c)
    select {
    case b := <-c:
        // do something with b if something happened on the channel
        _ = b
    case <-time.After(5 * time.Second):
        // timed out :-(
    }

Well, the timeout can be handled in Java too:

  Result result = future.get(5, TimeUnit.SECONDS);
  if (result == null) {
      // do something if timed out
  } else {
      // do something with result
  }
My original question was phrased badly though, and what I meant to ask was: is the underlying implementation faster? Since Go is designed as a systems language, is the channel/goroutine system a lot more performant than the Executor/Future method of Java?

EDIT: I see what newobj is talking about now:

With the Java method, the 'task queue' in this case is very rigid and would be hard to add futures from multiple locations, while the goroutine method allows easy access to add new messages to the channel from anywhere.

You could probably emulate the goroutine method using a synchronized queue or similar, but the goroutine version handles it automatically.

FYI - you can also have 100,000 goroutines on a standard desktop without hitting resource problems (the go authors mentioned debugging a system in production that had 1.3 million). I doubt the same could be said for futures.

Right, just a nitpick: if get() times out, it throws a TimeoutException instead of returning null (much cleaner).

I don't know either very well, so this isn't an answer to your question.

But looking at your code: c := make(chan Result) is several orders of magnitude nicer than ExecutorService executorService = Executors.newCachedThreadPool(); =)

Off topic: Why are their slides so unusable on iPad!? I can't swipe to the next slide without skipping that and 3 others.

The 'present' tool is open source, you can improve it if you have some JavaScript skills: http://code.google.com/p/go/source/browse/?repo=talks#hg%2Fp...

Here's the touch event handling code: http://code.google.com/p/go/source/browse/present/static/sli...

Yes, someone, please do this. I am the author of present, but I just pulled slides.js from another slide deck software.

It's on my list to try to fix this, but I have little experience with touch devices. I'm sure someone with experience writing UI code for touch devices could make quick work of this.

They're annoying in a regular desktop browser, too.

I don't know why people feel they need to break regular scrolling for their slides. I'm sure it's interesting content, but I'll wait for a PDF version of the slides.

Tell me when the PDF version has embedded, runnable code snippets.

> Tell me when the PDF version has embedded, runnable code snippets.

When you're making the basic task of seeing the slides impossible, nobody cares about your pointless turd-polishing.

Furthermore, PDF readers (at least Adobe's, but third-parties also sometimes do even if not all APIs are present) embed an ECMAScript interpreter, so you can indeed have embedded runnable code snippets in PDF: http://www.adobe.com/devnet/acrobat/javascript.html

> When you're making the basic task of seeing the slides impossible, nobody cares about your pointless turd-polishing.

These are slides for the presenter. Nobody cares if you see them or not. Only the presenter has to care.

> PDF readers [...] embed an ECMAScript interpreter

So you wrote a Go to Javascript compiler? Can the VM running in the PDF reader implement a web server that can interact with the outside world?

> These are slides for the presenter. [...] Only the presenter has to care.

Irrelevant, the whole thread of inquiry is about seeing the slides locally, you're two comments too late to try that one: you implicitly acknowledged the desire and desirability of personally viewing the slides in your previous comment.

> So you wrote a Go to Javascript compiler?

Also irrelevant (although there likely is no need to, there's an LLVM Go frontend and there's emscripten), the exact same method used here can be used there.

> Can the VM running in the PDF reader implement a web server that can interact with the outside world?

That doesn't even remotely make sense.

I think you are getting a bit too aggressive about some slides with poor UI. You are asking a lot of the slide authors to facilitate minor secondary usage at the expense of primary usage.

It could be a lot worse, they could be like 99% of the other slides in the industry: completely worthless without listening to the accompanying lecture.

> You are asking a lot of the slide authors to facilitate minor secondary usage at the expense of primary usage.

I'm in a desktop browser, and I find this incredibly annoying to navigate.

If that's not "primary usage" for viewing slides posted online, I don't know what is.

Primary considerations are centered around the presenter himself. He is the primary user of the slides, so whichever system he is most comfortable with will be the obvious choice.

Asking that the presenter dedicate less time to preparation and more time to constructing some sort of elaborate LLVM/emscripten PDF presentation just for the benefit of secondary users who won't/can't use arrow keys is fairly unreasonable.

> If that's not "primary usage" for viewing slides posted online, I don't know what is.

Seriously? The primary usage is presentation..........

> I think you are getting a bit too aggressive

That's a joke right?

> You are asking

I'm not really asking anything, I merely replied to "4ad"'s insulting dismissal of concerns voiced by two other people as to the usability of the deck, then replied to his dishonesty and goalpost-moving antics.

> facilitate minor secondary usage

Making it actually possible to see the deck you officially posted online is neither facilitation nor minor.

> at the expense of primary usage.

"Primary usage" is clicking a "run" button? Really?

> It could be a lot worse

That is a troubling qualifier, how is inaccessible information better than accessible worthless information?

The information is not inaccessible, just annoying for some to access.

> Making it actually possible to see the deck you officially posted online is neither facilitation nor minor.

Thankfully, it is "actually possible" to see the deck.

It doesn't matter if they have runnable code if I can't get to it.

Of course it does. Most other people can get to it.

The phrase "concurrency patterns" tells me we are not yet talking about the concurrency-oriented language of the future. This is in the sense that patterns usually deal with the weak spots of a language, and lo, we must build weird constructs like doing fanout (not relevant to the original problem). Go isn't alone here; in fact I see it every time someone thinks: "Erlang is nice, but gee, PIDs are so darn crude."

In my mind, a language that makes concurrency the real priority (and I don't mean to say Go, Rust, et al. shouldn't be tempered by other concerns) will not require intermediate patterns to map the problem to the code. In other words, something close to a 1:1 correspondence should exist between problem concurrency and language support, including the nature of communications.

I'd like to draw a comparison to memory management in C. It's a far simpler problem, but it's easy to illustrate that we don't (normally) need "allocation patterns" that go beyond the problem of memory needs. You can sum it up: "If you need memory, allocate it. Free it when you are done." The rules are simple and the advanced memory allocation techniques aren't required to map to the problem.

> patterns usually deal with the weak spots of the language

Design patterns, as the term is usually understood, do. Using the word "pattern" to refer to organisational structures that arise in practice is useful, and distinct from preaching design patterns like some people do for C++.

"You don't need to be an expert! Much nicer than dealing with the minutiae of parallelism (threads, semaphores, locks, barriers, etc.)"

Great! But then, a few slides later:

"Go has 'sync' and 'sync/atomic' packages that provides mutexes, condition variables, etc."

: (

Can a program written in Go still deadlock or not? I became a big fan of Clojure's no-brainer "no locks at all, hence no deadlock" approach btw.

Go has mutable variables by default, and channels can send pointers or other mutable values, such that two goroutines may be trying to modify them at once. Thus, mutexes can still be useful.

This may go back to the fact Go is trying to be a systems language, so while it's not C(++) in terms of controlling memory layout and resources, it still has some constructs designed to give more control over the system than something like Erlang or Haskell does (at least in the "default" language subset).

It is possible and indeed easy to do unsafe things in Go. It is not like Erlang or Haskell, in that you must still consider things like what goroutine "owns" a value without any particular language support or enforcement for that decision.

Go just makes things easy to get right, it doesn't make you get them right. Mutexes, condition vars, atomic operations on primitives have their place. That place is usually in scenarios where channels do not fit well.

A program written in Go can deadlock, but the runtime detects deadlock and terminates the program, so at a system level, it does not deadlock.

P.S. "No locks at all, hence no deadlock", is this not true, any system with blocking operations can deadlock. You can construct a Go program that only uses channels and get it to deadlock...

Yes, you can still deadlock, but the runtime will usually panic saying that no goroutines can advance, which is definitely nicer than just hanging. If the core of your code is synchronized on channels, it becomes very easy to manage.

This is indeed a nice feature, but it's worth pointing out that if any goroutine can make progress, the deadlock will be missed.

For example, if you launched a task to do some work in the background, and the task deadlocks with one of its own goroutines or even itself (e.g. reads and writes on the same channel), this deadlock will not be reported.

In a large system with many goroutines, I think it is more likely to have a deadlock within a component instead of process-wide, and these will not be reported. (Yet!)

Yes, I should have emphasized "usually".

I also locked up a program by inadvertently creating a small goroutine with a busy-loop (the typo being time.Sleep(1)). That little loop prevented any other goroutines from getting scheduled, though it wasn't hard to deduce what was happening, since the CPU was running at 100%.

How does the runtime detect this? I'd have suspected that such a thing would be undecidable, but I'm not at all a concurrency expert.

The runtime checks whether all goroutines are asleep, blocked waiting on a channel or lock. If so, it panics and halts the program.

It can't detect livelock or thrashing; if you are still waking up other goroutines but not doing anything useful, it won't prevent that.

I assume every OS thread the Go runtime is utilising looks for an unblocked goroutine to execute and can't find any.

It is undecidable in the general case, but most practical cases are detectable, so it's still a useful feature.

The packages are available but you don't need to use them and are often encouraged not to.

Yes, you can still have all goroutines asleep, and the program will halt, but that's due to improper concurrent design rather than an implementation mistake.
