Clojure core.async Channels (clojure.com)
208 points by siavosh 1608 days ago | 44 comments

The ClojureScript support needs work, but you can already express some incredible things. Those familiar with channels in Go should be dropping their jaws at the fact that you can do this in the browser:


With timeout channels, the throttler is a beautiful 7 lines of clear code; compare it to the imperative Underscore.js version: http://underscorejs.org/docs/underscore.html#section-64

I've been doing sophisticated JS UIs for 8 years now and had largely given up on ever being able to really manage async complexity - neither JavaScript Promises nor ES6 generators provide this level of process coordination and clarity.

Very exciting times.

EDIT: Oh, and don't forget you can pattern match on the result of a select (alt!) with core.match to get Erlang-style matching: http://github.com/clojure/core.match :) All this from libraries!

This has me super excited to build something cool with ClojureScript in the near future. I'm used to using lamina (https://github.com/ztellman/lamina) in Clojure, but for working with the browser this is a whole new world.

I would be interested in seeing a comparison to lamina. I haven't had a chance yet to think about it properly, but my first impression is that core.async is strictly more expressive, but that it couldn't implement something like lamina's flow diagrams.

CSP is message-oriented, Lamina is more stream-oriented. One implication of this is that in Lamina multiple threads can traverse the stream topology without any contention, but in core.async this isn't possible. Another is that the topology is explicitly described in Lamina, but it's implicit in core.async.

That being said, I'm not clear if any of these differences are important to most people's use cases. I suspect that Lamina channels can behave like a core.async channel without much trouble, so it wouldn't bother me much if core.async became the primary interface for these sorts of things. If someone cares a lot about throughput or other distinguishing factors, they can just look under the covers.

I am, however, a little curious how well the goroutine mechanism works in terms of being able to sample the state of the various tasks. Without some equivalent for "jstack", they become something of a black box.

Do you mean in constructing networks or visualizing them? I see no reason why you couldn't do either with core.async.

Can you walk the chain of consumers and where they're forwarding the messages? I didn't see anything like that in the code, but you're far more familiar with it than I am.

No, that's not there. I just mean that conceptually there is no reason we won't eventually have those kinds of debugging, visualization, etc. tools. This is a work in progress. Zach has done a phenomenal job with that stuff in Lamina.

It seems to me that there are real obstacles to that; since the propagation of messages from one channel to another requires a separate 'take' and 'put', there are some halting-problem-style obstacles to figuring out the causality of how messages are propagated.

You can maybe create propagation operators that make these relationships explicit, but that would require either eschewing the bare take/put methods, or making sure the take/put behavior is in sync with whatever metadata you use to describe it.

That's the kind of thing I was thinking about. The topology in lamina is described by data whereas in core.async it's described by code. I'll have to experiment more with both and see where the tradeoffs lie.

It seems possible to me to build the lamina style topologies on top of core.async.

In fact, that seems preferable to me.

I've made a debounce example and compared it to an Angular.js promises encoding: http://gist.github.com/swannodette/5888989

Using core.match with core.async does sound promising! Am I correct that Erlang-style selective receive would NOT be possible?

I don't see this as much of a downside though, considering the issues one can have with selective receives.

Yes I'm pretty sure selective receive is not supported.

You can mimic selective receive using a router process. Essentially, you could generate one channel for every branch of the pattern match, then generate a process which reads from the input channel and deals the message to the appropriate branching channel.

The last sentence is great: "I hope that these async channels will help you build simpler and more robust programs." — I think it expresses the Clojure philosophy very well.

All the previous Clojure concurrency constructs have made my programs significantly simpler and more robust. I forgot what a deadlock is. And I really like the practical approach that Clojure takes.

In case it's useful, I've been working on this code-oriented companion walkthrough for this post: https://github.com/clojure/core.async/blob/master/examples/w...

It is, thank you.

For working with single value channels having go be an expression seems super handy.

For multiple-value channels, my head keeps going to wanting map, filter, etc. over the channels, and I'm thinking I'm missing something, because that would just be recreating Rx / Observables.

You can build analogous versions of map/filter/etc. on top of channels. The core.async team is still focused on the primitives, but surely some higher-order operations will follow in the not-too-distant future. I suspect we'll need to wait and see what patterns develop, since they will surely differ from Go a little bit, due to the functional emphasis of Clojure.

I should also mention a key difference between channels and Rx/Observables: The fundamental operations for observable sequences are subscribe & unsubscribe. The fundamental operations for channels are put and take. In the push sequence model, the consumer must give a pointer to itself to the publisher, which couples the two processes and introduces a resource-management burden (i.e., IDisposable).

You can think of a pipeline from A to B in the following ways:

Sequences: B pulls from A

Observables: A pushes to B

Channels: Some process pulls from A and pushes to B

That extra process enables some critical decoupling!

> since they will surely differ from Go a little bit, due to the functional emphasis on Clojure

Very interested in this. The trivial Go examples seem to treat channels much more as side effects.

While I still need to wrap my head around the differences (and thanks for the explanation) one quick take away is how much easier it would be to write the higher order operations with core.async channels versus observables.

I took a crack at this, if you're still interested:



I think this is extremely cool, but I'm struggling to understand the disadvantage of actors compared to channels.

I think I understand the argument; just not sure I'm convinced by it. As I see it, the argument goes like this: Suppose you want to set things up so that block "A" is sending messages to block "B". With channels, block "A" just sends a message to a channel that it was handed; it doesn't know or care what is at the other end of the channel. The higher-level code that set things up created a channel, and then passed that channel to both "A" and "B". So, everything is loosely coupled, which is great.

With actors, on the other hand (so the argument goes), you would have to create actors "A" and "B", and then send "A" a reference to "B", saying, "Hey 'A', here is the actor to which I want you to send your output." So now "A" and "B" are wired up, but there is no explicit "channel" object connecting them. I think the argument is that this is worse than channel, because there is a tightly-coupled connection between "A" and "B".

But I don't think there is. In my scenario, some higher-level object still had to create "A" and "B" and wire them up; so, they are loosely coupled.

I think it may, perhaps, be true that an actor system lends itself more readily to tight coupling than a channel system does -- in other words, an actor system might lead the casual programmer down the wrong path. Is that what Rich meant when he said of actors, "Yes, one can emulate or implement certain kinds of queues with actors (and, notably, people often do), but..." ?

If I'm missing something, I'd love to be enlightened!

And this shows how cool it is to have languages with minimal builtin constructs and powerful abstractions for library writers.

Interesting -- this looks similar to Lamina [1], which I've used and really like.

1: https://github.com/ztellman/lamina

People mention the JVM, which is the primary clojure platform. People say that it is complemented by Clojurescript on the client side.

Inference : ClojureCLR seems to be dead :(

It's not dead yet. David Miller has been untiringly maintaining Clojure's CLR implementation for years now. ClojureCLR currently has feature parity with Clojure on the JVM. core.async is still alpha (it's also an external lib); I bet that once it reaches some stability, David will add CLR support in a weekend (as has usually been the case). Unfortunately that doesn't mean there is much community interest in the CLR port, but David's stewardship of the project is relentless.

Example: https://groups.google.com/forum/#!topic/clojure/Kvefcae2W6s

Project Source: https://github.com/clojure/clojure-clr

> Inference : ClojureCLR seems to be dead :(

For at least a few years now.

I remember watching a Hickey talk where I recall him saying this was the case even before 2011, but the name of the talk escapes me at the moment. However, I was able to find this snippet from 2011:

Fogus: Clojure was once in parallel development on both the JVM and the CLR, why did you eventually decide to focus in on the former?

Hickey: I got tired of doing everything twice, and wanted instead to do twice as much.


No, that isn't right.

Years before the first release of Clojure, Rich used to maintain a parallel build of Clojure for both platforms before concentrating on the JVM; but that is ancient history, and isn't relevant to ClojureCLR today.

ClojureCLR is a port maintained by David Miller, Rich doesn't work on it, so it isn't surprising that he hasn't made any comments about it.

Though it is true that the CLR implementation has much less adoption than the JVM or JS implementations.

Oh, I wasn't aware that ClojureCLR was still being maintained. All I knew was the Hickey was no longer maintaining it.

Thanks for clarifying!

I spent a decent amount of time looking at Lamina, and while I liked the concept, the implementation was, imho, a very complex set of macros and dynamically built Java types and interfaces. It did not feel like the "clojure way" and just following the code of a single message send was painful.

Will core.async be similar in its implementation complexity?

The channels themselves are a bit hard to understand because there's a careful dance of locks/unlocks to get it all to work correctly, but it shouldn't be hard to figure out.

The IOC go macro is huge (about 500-600 LOC), but it's pretty straightforward. It's really nothing more than a micro-compiler that parses Clojure, does some analysis on it, and spits it back out as Clojure. If you've done any work with compilers it should be very easy to understand.

I believe you have the wrong case there. The source [0] is available for your review.

[0] https://github.com/clojure/core.async/

Can someone explain the benefits of channels vs event handlers for frontend development in a non-abstract way like I'm five?

Channels allow you to write code that "pulls" events off of a stream or enumerable-like structure. This can dramatically simplify event-heavy front-end code.

Bacon.js is a library that exposes a similar technique (FRP), and their readme outlines the benefits in tangible terms


It's not quite clear what patterns will emerge with respect to UI event handlers, so there really isn't an ELI5 answer yet in the View layer. The backend advantages are well covered by the Go language community, so check out Rob Pike's talk [1]. As for other parts of the frontend, like the model layer, channels can lead to a major simplification for the callback hell involving multiple coordinated network requests or for transforming and processing events off a message socket, like that of Socket.io.

[1]: http://talks.golang.org/2012/concurrency.slide#1

A more detailed explanation of how those IoC threads work would be awesome too, if someone's going to be explaining things to children.

The IoC threads work by converting park-able functions into Static Single Assignment (SSA) form [1] and then compiling that to a state machine. Essentially, each time a function is "parked", it returns a value indicating where to resume from. These little state-machine functions are basically big switch statements that you don't need to write by hand. This design is inspired by C#'s async compilation strategy; see the Eduasync series [2] on Jon Skeet's blog, and the compilation post [3] in particular. Once you have these little state machines, you just need some external code to turn the crank.

[1]: http://en.wikipedia.org/wiki/Static_single_assignment_form

[2]: http://msmvps.com/blogs/jon_skeet/archive/tags/Eduasync/defa...

[3]: http://msmvps.com/blogs/jon_skeet/archive/2011/05/20/eduasyn...

These are all excellent, that wiki page is surprisingly easy to understand. Thanks.

My impression is that the IoC threads spare you from needing to write continuation-passing-style code by hand and let you program in a more idiomatic style instead. This would be especially useful if you have lots of jumps in your control flow, such as loops, returns, and break statements.

What are the limitations of the go macro, which I think is what makes these control-flow manipulations possible?

There really aren't any semantic limitations of the go macro. About the only thing that doesn't work is that you can't use the "binding" macro inside of a go block (though using one outside the macro will work as expected). Go block translation stops at function boundaries, so you can use a for loop inside a go, but since for returns a lazy seq, putting takes inside the body of a for doesn't really make sense (or work); instead, use dotimes or loop/recur.

Aside from that, go blocks do slow down the code they contain a bit. In my tests, the slowdown is within 1.5-2x of the original code. However, since go blocks shouldn't be doing tight loops, this shouldn't matter. Functions called within go blocks are not modified, and so retain their original performance characteristics.

I hope that helps.
