With timeout channels, the throttler is a beautiful 7 lines of clear code - compare to the Underscore.js imperative mess: http://underscorejs.org/docs/underscore.html#section-64
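For illustration, here's a hedged sketch of such a throttler in core.async (the function and channel names are my own, not taken from the linked code): it forwards each value and then parks on a timeout channel before forwarding the next.

```clojure
(require '[clojure.core.async :as a :refer [chan go-loop <! >! timeout]])

;; Hypothetical throttler: forwards values from `in` to `out`,
;; waiting at least `ms` milliseconds between forwards.
(defn throttle [in ms]
  (let [out (chan)]
    (go-loop []
      (when-some [v (<! in)]        ; nil means `in` was closed
        (>! out v)
        (<! (timeout ms))           ; park until the timeout channel closes
        (recur)))
    out))
```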
Very exciting times.
EDIT: Oh and don't forget you can pattern match on the result of a select (alt!) with core.match to get Erlang style matching http://github.com/clojure/core.match :) All this from libraries
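A sketch of what that combination might look like, using `alts!` (the functional cousin of `alt!`) so the result is a `[value port]` pair you can match on — the message shapes here are made up for illustration:

```clojure
(require '[clojure.core.async :as a :refer [chan go alts!]]
         '[clojure.core.match :refer [match]])

(let [c1 (chan) c2 (chan)]
  (go
    ;; take from whichever channel is ready first
    (let [[msg _port] (alts! [c1 c2])]
      ;; dispatch on the shape of the message, Erlang-style
      (match msg
        [:ok result]    (println "got" result)
        [:error reason] (println "failed:" reason)
        :else           (println "unexpected message" msg)))))
```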
That being said, I'm not clear if any of these differences are important to most people's use cases. I suspect that Lamina channels can behave like a core.async channel without much trouble, so it wouldn't bother me much if core.async became the primary interface for these sorts of things. If someone cares a lot about throughput or other distinguishing factors, they can just look under the covers.
I am, however, a little curious how well the goroutine mechanism works in terms of being able to sample the state of the various tasks. Without some equivalent for "jstack", they become something of a black box.
You can maybe create propagation operators that make these relationships explicit, but that would require either eschewing the bare take/put methods, or making sure the take/put behavior is in sync with whatever metadata you use to describe it.
I don't see this as much of a downside though, considering the issues one can have with selective receives.
All the previous Clojure concurrency constructs have made my programs significantly simpler and more robust. I forgot what a deadlock is. And I really like the practical approach that Clojure takes.
For multi-value channels, my head keeps going to wanting map, filter, etc. over the channels, and I'm thinking I'm missing something, because that would just be recreating Rx / Observables.
I should also mention a key difference between channels and Rx/Observables: the fundamental operations for observable sequences are subscribe and unsubscribe, while the fundamental operations for channels are put and take. In the push-sequence model, the consumer must give a pointer to itself to the publisher, which couples the two processes and introduces a resource-management burden (i.e., IDisposable).
You can think of a pipeline from A to B in the following ways:
Sequences: B pulls from A
Observables: A pushes to B
Channels: Some process pulls from A and pushes to B
That extra process enables some critical decoupling!
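A minimal sketch of that third model in core.async — an independent go process that pulls from one channel and pushes to another, so neither endpoint knows about the other (core.async's built-in `pipe` does roughly this):

```clojure
(require '[clojure.core.async :as a :refer [chan go-loop <! >! close!]])

;; The "some process" in the middle: pulls from `a-ch`, pushes to `b-ch`.
;; A and B each hold only their own channel; the coupling lives here.
(defn pipe-process [a-ch b-ch]
  (go-loop []
    (if-some [v (<! a-ch)]
      (do (>! b-ch v)
          (recur))
      (close! b-ch))))   ; propagate closure downstream
```

Because this middle process is ordinary code, you can swap in buffering, filtering, or fan-out without touching A or B — which is the decoupling being claimed.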
Very interested in this. The trivial Go examples seem to treat channels much more as side effects.
While I still need to wrap my head around the differences (and thanks for the explanation), one quick takeaway is how much easier it would be to write the higher-order operations with core.async channels than with observables.
I think I understand the argument; just not sure I'm convinced by it. As I see it, the argument goes like this: Suppose you want to set things up so that block "A" is sending messages to block "B". With channels, block "A" just sends a message to a channel that it was handed; it doesn't know or care what is at the other end of the channel. The higher-level code that set things up created a channel, and then passed that channel to both "A" and "B". So, everything is loosely coupled, which is great.
With actors, on the other hand (so the argument goes), you would have to create actors "A" and "B", and then send "A" a reference to "B", saying, "Hey 'A', here is the actor to which I want you to send your output." So now "A" and "B" are wired up, but there is no explicit "channel" object connecting them. I think the argument is that this is worse than channels, because there is a tightly coupled connection between "A" and "B".
But I don't think there is. In my scenario, some higher-level object still had to create "A" and "B" and wire them up; so, they are loosely coupled.
I think it may, perhaps, be true that an actor system lends itself more readily to tight coupling than a channel system does -- in other words, an actor system might lead the casual programmer down the wrong path. Is that what Rich meant when he said of actors, "Yes, one can emulate or implement certain kinds of queues with actors (and, notably, people often do), but..." ?
If I'm missing something, I'd love to be enlightened!
Inference: ClojureCLR seems to be dead :(
Project Source: https://github.com/clojure/clojure-clr
For at least a few years now.
I remember watching a Hickey talk where I think I recall him saying it was even before 2011, but the exact name escapes me at the moment. However, I was able to find this snippet from 2011:
Fogus: Clojure was once in parallel development on both the JVM and the CLR, why did you eventually decide to focus in on the former?
Hickey: I got tired of doing everything twice, and wanted instead to do twice as much.
Years before the first release of Clojure, Rich used to maintain a parallel build of Clojure for both platforms before concentrating on the JVM; but that is ancient history, and isn't relevant to ClojureCLR today.
ClojureCLR is a port maintained by David Miller; Rich doesn't work on it, so it isn't surprising that he hasn't made any comments about it.
Though it is true that the CLR implementation has much less adoption than the JVM or JS implementations.
Thanks for clarifying!
Will core.async be similar in its implementation complexity?
The IOC go macro is huge (about 500-600 loc) but it's pretty straightforward. It's really nothing more than a micro-compiler that parses Clojure, does some analysis on it, and spits it back out as Clojure. If you've done any work with compilers it should be very easy to understand.
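As a rough illustration of what that micro-compiler does (this is a simplification, not the actual generated code — the real output is a full state machine), a park inside a go block is conceptually rewritten into a callback registered on the channel:

```clojure
(require '[clojure.core.async :as a :refer [chan go <! take! put!]])

(def c (chan))

;; What you write inside a go block:
(go (println "value:" (<! c)))

;; Conceptually, the macro compiles that park into something like
;; a callback attached to the channel:
;;   (take! c (fn [v] (println "value:" v)))

(put! c 42)
```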
Bacon.js is a library that exposes a similar technique (FRP), and their readme outlines the benefits in tangible terms
Aside from that, go blocks do slow down the code they contain a bit. From my tests, this slowdown is within 1.5-2x the original code. However, since go blocks shouldn't be doing tight loops, this shouldn't matter. Functions that are called within go blocks are not modified, and so retain their original performance semantics.
I hope that helps.