Erlang doesn't have threads - it has processes. Threads share state; processes don't. Erlang processes are created within the Erlang cluster - they can't be operating-system processes, because they might be running on a different instance of the operating system.
Try and catch are actually very rare constructs in Erlang - usually appearing only at the boundaries of the system with the rest of the world. In my 10 years of experience there is about one try/catch per 20k-25k lines of production code.
Erlang doesn't handle errors - you let your process crash and let OTP restart it.
The article seriously underestimates the importance of OTP.
An Operating System is a set of libraries that lets your unwritten application do things like write to persistent storage, talk to a network, run in a cron job, have a GUI, etc, etc...
To have a reliable computer system you need at least 2 computers, and OTP is an Application System that runs across multiple physical boxes. This means that your unwritten software can fail over if a box dies, recover in the presence of gross errors, cluster up, etc, etc... (Google App Engine is another example of an A/S.)
The first cluster we ever deployed was a shambles. We moved a webserver from a London box to a Californian one and its performance went through the floor. Because of a typo we had actually only moved the web bit, leaving the database running in London.
Not only that, they also have separate heaps -- truly concurrent GC is possible without too much "clever" code.
I haven't used Erlang, but that sums up my experience with Go very well. Coming from Python, the hardest thing for me to adjust to was not the concurrency model (which is incredibly straightforward), but the error handling.
That said, the adjustment was worth the effort. I've come to dislike the idea of my functions failing (panic()ing) because of some nested function call four, five, six levels deep. Returning error codes makes the error handling explicit, and because the compiler complains about unused lvalues, it makes it easy to spot code where the errors are ignored (just look for underscores). Contrast this with Python, where it's almost impossible to tell whether or not I need to wrap a given call in a try/except block without digging deeper into the code.
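To make the contrast concrete, here is a minimal sketch of that style (`parsePort` is a hypothetical helper invented for illustration, not from any real codebase):

```go
package main

import (
	"errors"
	"fmt"
)

// parsePort returns a value *and* an error, so the caller must decide
// what to do at the call site -- the style described above.
func parsePort(s string) (int, error) {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil {
		return 0, fmt.Errorf("bad port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}

func main() {
	// Handling the error explicitly, right where it occurs:
	if p, err := parsePort("8080"); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("port:", p)
	}
	// Ignoring the error is visible in the source -- the underscore is
	// exactly the thing to grep for in review.
	p, _ := parsePort("not-a-port")
	fmt.Println("ignored error, got zero value:", p)
}
```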
It makes writing concurrent code much more logical, because it couples the error handling more closely with the spot at which the error occurs, while at the same time giving the option to let errors 'bubble up' as needed. It's not quite the same way that Lisp handles conditions so, so elegantly, but it's about the closest I'd expect in a C-style language.
Or you could simply not assign the single return value of a function that returns an error -- but since pretty much everything returns an error (whether or not you pay attention to it), 'naked' function calls are just as suspicious.
Don't take anything he said about error handling in Erlang at face value, because very little of it applies to idiomatic Erlang. Basically the only thing which is correct is:
> In Erlang, it is idiomatic to let your functions fail
And even that has to be stretched: in Erlang, it is idiomatic to let your process fail (throwing exceptions is rare, catching them is rarer still; the average Erlang program will likely do neither "procedurally"). That's because idiomatic Erlang separates error processing from error recovery, and another process will handle error recovery for the failing one. Furthermore, he conflates error handling and failing, when Erlang very much separates them (error handling is done via return values, in a terser yet more explicit way than in Go).
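That separation -- the worker never handles its own crash; another process receives its termination reason as a message -- can be mimicked in Go. This is an analogy, not Erlang semantics; `monitor` and `exit` are invented names loosely echoing Erlang's monitors and exit signals:

```go
package main

import "fmt"

// exit plays the role of an Erlang exit signal: the worker's
// termination reason, delivered to whoever is monitoring it.
type exit struct{ reason any }

// monitor runs work in its own goroutine and turns a panic ("crash")
// into a message on the returned channel -- recovery logic lives in
// the monitoring goroutine, not in the worker.
func monitor(work func()) <-chan exit {
	done := make(chan exit, 1)
	go func() {
		defer func() {
			// recover() is nil on a normal return, non-nil on a crash.
			done <- exit{reason: recover()}
		}()
		work()
	}()
	return done
}

func main() {
	ex := <-monitor(func() { panic("db connection lost") })
	fmt.Println("worker died, monitor saw:", ex.reason)

	ex = <-monitor(func() {}) // normal termination, reason is nil
	fmt.Println("normal exit, reason:", ex.reason)
}
```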
The rest is worse.
I'm leaning towards Clojure now (partially because I want to read SICP), but at the very least a comparison between Erlang and Clojure would be very interesting (not just limited to concurrency).
Edit: I have thought about Go lang, too, but Rich Hickey's talks about functional programming (especially immutability) have won me over.
As a language Scala supports both functional and imperative programming. I recommend you check it out too.
The original Scala Actors did this as well, with predictable results. Scala runs on the JVM, which is a monolithic process. The usage of selective receive meant that over time, actors would accumulate enough unhandled messages to result in a possible OutOfMemoryError, from which there is no recovery on the JVM. Akka does not do selective receive, so it does not entirely follow OTP.
That said, it is inspired by OTP, as evidenced by the original name of the project, Scala OTP. It's just optimized for the JVM.
Go is noticeably more fragile with errors. Unhandled panics (which are just a fact of life unless you're a perfect programmer) will result in the entire program terminating if you don't have something that handles them. This behavior is forced on it precisely because of the shared memory model (one of the actual big differences); if one goroutine has f'ed up, you simply don't know what the state of your program is anymore. (Theoretically you could do better than that, but not simply.)

Since Erlang memory is isolated, it can kill just that one process, and the other processes can pick up the pieces. (Not necessarily perfectly or without loss, but in practice, really quite well.) Consequently, for any serious Go program, you're still going to have to choose an exception-handling policy; it's not as if it has gotten away from exceptions. Failures are a fact of life... for all you know, memory was corrupted. Again, the difference here is not "error handling policy" but the longer-term consequences of shared vs. isolated memory spaces.

If you just type up idiomatic Erlang OTP code, you have to go out of your way to not have a bulletproof server; if you just type up idiomatic Go code, it's on you to make sure you're not excessively sharing state and that you aren't going to see your entire server, doing tens of thousands of things, come down due to one unhandled panic. Go programmers need to be more worried about error handling in practice than Erlang programmers, since Erlang programmers aren't facing the termination of the entire world if they screw up one thing.
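"Choose an exception handling policy" in Go means hand-rolling machinery like the following -- a toy stand-in for an OTP one_for_one supervisor, not anything that ships with Go:

```go
package main

import "fmt"

// supervise reruns worker each time it panics, up to maxRestarts --
// the kind of restart machinery OTP gives you for free and Go makes
// you write yourself.
func supervise(worker func(), maxRestarts int) (restarts int) {
	for restarts <= maxRestarts {
		crashed := func() (c bool) {
			defer func() {
				if recover() != nil {
					c = true // isolate the crash to this one run
				}
			}()
			worker()
			return false
		}()
		if !crashed {
			return restarts // worker finished normally
		}
		restarts++
	}
	return restarts // gave up: restart intensity exceeded
}

func main() {
	attempts := 0
	restarts := supervise(func() {
		attempts++
		if attempts < 3 {
			panic("transient failure") // first two runs crash
		}
	}, 5)
	fmt.Println("restarts before success:", restarts)
}
```

A real supervisor would also need restart-rate limiting, per-child state, and shutdown ordering -- exactly the accumulated detail OTP encodes.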
There's also a recurring pattern in newer language advocates in which they will in one year claim it's a good thing that they don't have X, and next year tout to the high heavens how wonderful it is that they just implemented X. I went around with the Haskell world on a couple of those issues ("no, strings are not linked lists of numbers", "yes they are you're just not thinking functionally and inductively dude, and by the way, six months later, check out this totally radical ByteString library, and when that still turns out not to be stringy enough hey, check out Data.Text six months later..."). Thinking you can get away without OTP is likely to be one of those for Go. No. You need it, though I have questions about whether it can even be built in Go, because one of the other actual differences between the languages...
... which is Channels vs. Processes. Go has channels, but you can't tell who or what has them, and there's no such thing as a "goroutine reference". By contrast, Erlang has processes, but no way to tell what messages they may send or receive, and there's no such thing as a "channel". Again, this has major impacts on how the system is structured, in particular because it is meaningful to talk about how to restart a "process" in a way that it is not meaningful to talk about how to restart a "channel".
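One consequence of that asymmetry: since a caller only ever holds channels, the usual Go workaround is to let a worker's inbox channel stand in for its identity, with each request carrying its own reply channel. A sketch (`spawn` and `call` are invented names, loosely echoing gen_server:call):

```go
package main

import "fmt"

// A request carries its own reply channel -- the Go substitute for
// "reply to my pid" in Erlang.
type request struct {
	msg   string
	reply chan string
}

// spawn starts a worker and returns its inbox. Callers hold a channel,
// not a goroutine reference: there is no way to ask which goroutine
// (if any) is serving it, and no meaningful way to "restart" it.
func spawn() chan request {
	in := make(chan request)
	go func() {
		for req := range in {
			req.reply <- "echo: " + req.msg
		}
	}()
	return in
}

// call does a synchronous round-trip over the inbox.
func call(in chan request, msg string) string {
	req := request{msg: msg, reply: make(chan string)}
	in <- req
	return <-req.reply
}

func main() {
	worker := spawn()
	fmt.Println(call(worker, "hello")) // echo: hello
	close(worker)
}
```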
Go advocates desperately, desperately need to avoid the temptation to explain to themselves why Erlang isn't as good, because then they'll fail to learn the lessons that Erlang learned the easy way. There's a lot of ways to improve on Erlang, but let me tell you that despite those opportunities, Erlang as a language is one of the wisest languages around, you do not want to try to exceed it by starting from scratch. Learn from it, please, I want a better Erlang than Erlang, but talking yourself into how Go is already better than Erlang isn't going to get you there.
That's fantastic advice for language advocates everywhere:
Rather than pissing on other languages, learn from them, understand what good there is in them, and figure out how to build on that.
Sometimes that's difficult: if you're forced to work 10 hours a day with shitty PHP code... you lack perspective, but even that language has some pretty good things, although (IMO) they mostly revolve around the runtime/environment and how easy it makes it to get something up and running.
For a long time, I was really into Tcl, and still think it's a cool language in many ways, but some of the people that were really into advocacy seemed to get into this mentality where there were no blemishes, only features. That kind of thinking makes you blind to what really does need fixing, and makes it difficult to evaluate things objectively.
The problem with [Char] for bulk text IO was well acknowledged when I wrote bytestring. To quote:
"The Haskell String type is notoriously inefficient. We introduce a new data type, ByteString, based on lazy lists of byte arrays, combining the speed benefits of strict arrays with lazy evaluation. Equational transformations based on term rewriting are used to deforest intermediate ByteStrings automatically. We describe novel fusion combinators with improved expressivity and performance over previous functional array fusion strategies. A library for ByteStrings is implemented, providing a purely functional interface, and approaches the speed of low-level mutable arrays in C."
Honestly, if the author is treading on 'errors vs exceptions', I don't think he has grasped the big picture yet. I'm not even finished with the book and I can tell you that. In fact, I usually hate exceptions, but with how monitoring works, I've found it quite elegant.
One other super important thing to note is that Go's channels don't work over a network. There was an effort to create a "netchan" package to do just this, but so far no one has implemented it cleanly enough to be satisfied.
It is truly one of the few industrial, battle-hardened functional and concurrent languages. I think languages that came after it and claim to have those features should at least somehow justify how they have improved on what's there.
Now, OK, there is a balance. Someone spent time and effort, wrote a new language, and open-sourced it. Should we criticize them? They are giving it away, and here we are telling them their work has some big warts and that someone already built something similar. Yeah, it is an interesting question how heavy the criticism should be...
What is your opinion of Cloud Haskell? It is a library which attempts to do exactly that.
I don't mean to go against this point--it's very true, and I think it's important--but I think this conversation (and most comparisons of Go with Erlang) miss something: Erlang is a platform (it has its own VM, and basically its own OS, just relying on the outer OS as a hypervisor.) Meanwhile, Go isn't a platform, nor is it trying to be. Its designers (Rob Pike and Ken Thompson) already made the platform, first, a long time ago. It was called Unix.
These folks are serious about Unix--they want you to use it. They aren't going to reimplement Unix on top of Unix if they can help it. Unix is already the native set of abstractions they think in terms of. An Erlang "release" (VM + source) should be compared to an entire Unix VM with a disk containing some Go binaries; not just a blob of Go source on its own.
Which is to say, the equivalent of the OTP exists for Go, but it isn't in Go--it's in Unix. OTP services are fundamentally "platform-level" things, and Unix provides them. Where are supervision trees? They're in upstart(8). Where is logging? rsyslog(8) will do it. And so forth.
There's nothing wrong with treating an individual Go process as equivalent to an Erlang process (other than overhead, but that's a problem with your Unix implementation, not with Unix as a platform). Make each Go process (that is, Go binary) have a single responsibility, so it can crash on its own. Then, give it a supervisor who can restart it with the right state. Some processes will still need to be larger, of course--goroutines still serve a purpose--but when the process crashes, you'll lose all of that, so don't put everything in there.
Since you now have multiple Go processes running, you'll need to do IPC. It's Unix: do it with sockets. What do you send on them? You can import a raw struct-specifier header file (or a whole client stub library, like in Erlang) from the include/ directory of each other process that specifies types it will understand, and then speak that "protocol" to it. Or you can use a ProtoBuf spec, to make your process more friendly for third-party use. Or, you can use plain text, like most Unix processes.
A single goroutine in each Go process should manage reads from this socket, deserialize the messages coming from it, and stuff them on channels relevant to their meaning. Then, the other parts of your process that care about external messages can receive on those channels when they wake up. Sounds a lot like an Erlang process inbox, doesn't it?
And so forth.
I think a lot of people are used to languages that provide their own insular inner-platform (Ruby, Python, Erlang, C#, Java, etc.) with its own implementation of everything from process scheduling to message passing to bytecode format to exception-handling, and think that Go is another one of these. Go is not this. Go is, despite all its modern trappings, "better C", and like C, it is heavily bound to Unix for most of the operations stuff that is important to high-availability et al.
Try Go with Unix--it might change your opinion on how many decades of hard-won experience Go is leaning on :)
Well, okay, their abstraction-set probably hews closer to Plan 9 these days; Go even uses the term "runes" to refer to Unicode code points, and so forth. Still, all the ideas are backportable without too much of a fight.
This is great, and I think many of Go's advocates - including devs at Google - underplay it. I don't understand why, particularly given the pedigree of Pike and Thompson.
My approach to building large systems in Go is based around processing pipelines, and tries to be as UNIX-y as possible. The interface types in the io package particularly fit the everything-is-a-file model, where processes can do one thing well while having their inputs and outputs connected to pipes, files, sockets, named sockets, devices, etc.
In short, building complex systems with Go components has made me a better UNIX programmer, a level that I could never quite reach in C due to all the distractions of memory management and unsafety.
For example: one system I implemented needs to take a few hundred very large CSV files every day, aggregate and sort them, perform some complex processing on them, and output a result in a very different format to many different output files. It's an extremely complex system, and each component is in Go, performing a specific task, e.g.:
* combining the many source files
* cleaning the source files
* performing some aggregation on the stream
* splitting the stream into many parts that other components can read from in parallel (named pipes in Unix make this very nice)
* splitting the stream into many output files
etc. Every component is dumb and does one thing, but a single controlling program is responsible for handling command line arguments that describe the overall outcome and setting up the stdin and stdout stream of all of the components to create the final result.
There's a beautiful simplicity to systems implemented like this, and it means you can take advantage of existing tools like grep, awk, sed, sort, cut, etc. to do a lot of the heavy lifting more reliably and quickly than you could probably implement yourself, while still coding the overall system at a reasonably high level of abstraction. Go doesn't lead to this approach directly, but it's very pleasant working with it as a citizen of this wider environment.
In fact, let me underline that... the UNIX platform is a GREAT deal weaker than the Erlang platform on this front. It has faint shadows of what Erlang supports, which are incredibly heavyweight, far less reliable, FAR less granular, and effectively can not be used the way Erlang's can be. It's an answer, sure, but it's not even in the same league in this particular way, so don't fool yourself otherwise.
Of course UNIX has other advantages, but, well, that's why I run Erlang on UNIX, so....
I am in danger of being downvoted here, as I am a bit out of my depth, but after reading the Go language documentation it seemed to me that the Go standard libraries are designed to panic internally but recover gracefully and return an error type to the caller - thus avoiding program termination - which I thought was quite a nice idiom. 'Imperfect' programmers would have to explicitly call panic without recover to cause termination?
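That is indeed an idiom the standard library uses internally (encoding/json, for example, panics in nested helpers and recovers at the API boundary). A minimal sketch of the pattern -- `ParseDigits` and `mustDigit` are invented for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// mustDigit panics on bad input, as a deeply nested helper might.
func mustDigit(c byte) int {
	if c < '0' || c > '9' {
		panic(fmt.Sprintf("not a digit: %q", c))
	}
	return int(c - '0')
}

// ParseDigits is the public boundary: it recovers any internal panic
// and converts it into an ordinary error for the caller.
func ParseDigits(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			n, err = 0, errors.New(fmt.Sprint(r))
		}
	}()
	for i := 0; i < len(s); i++ {
		n = n*10 + mustDigit(s[i])
	}
	return n, nil
}

func main() {
	fmt.Println(ParseDigits("123")) // 123 <nil>
	fmt.Println(ParseDigits("12x")) // 0 not a digit: 'x'
}
```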
Honestly, I can't imagine a better way to build fault-tolerant applications than with the help of a supervision tree, errors that bubble up, and fast process restarts.
Would love to try out Scala or Go though, but Erlang has served me well so far.
Not sure what they offer that's similar to OTP.
I am genuinely puzzled why people embark on building systems which they intend to be massively scalable using languages which don't have a technology like Erlang/OTP or Akka to support distribution across multiple boxes.
The fact that, as opposed to goroutines, Erlang processes can transparently run on some other node is the most important difference.
Using the old deprecated netchan package doesn't count ;).
Using C++, Java or even Python for highly concurrent and fault-tolerant systems is a bit like hammering screws into the wall. It can be done with enough effort -- but try a screwdriver or a drill and see what a difference it makes.
The secret sauce is all about fault tolerance; everything else amazingly and logically follows from it: isolation, hot code reloading, distribution (running nodes on multiple machines).
Supervision trees only work because the worker processes they supervise are supervisable. You can't just bolt them on. If you are building things with threads (i.e. with shared state), then 'supervision' consists of closing them ALL on any error in ONE and restarting. Not so useful.
That would depend upon whether the programmer linked them or not. If they're linked, the exception propagates and potentially kills the peer process. If they're nodes in a supervision tree, it's up to the supervisor as to whether to: respawn the dead process; kill the peer and then restart them both; or even to kill the peer and die itself, passing the error up the tree.
It's a matter of resource cleanup and consistency. Not so much what happens if one of the processes dies, but rather what happens if only one of them does?
On a slightly related note, you should fix the > / < in the code snippets :)