
I haven't really written about this anywhere, and the code in its current state is far from self-documenting, so I'll give it a shot here.

The code transformation is pretty much all local. Since everything in Clojure is an expression (this is not true of Python or JS; I'm not sure about F#), I don't need to perform much analysis; I just need to make sure that each expression doesn't care about the difference between a future and a realized value.

Let's assume three primitives:

* an asynchronous future [1]

* something which merges a list of futures and realized values into a single future representing a list of values

* something which takes a function and a future, and returns a future representing '(apply fn value-of-future)', that will be realized once the future is realized
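A rough sketch of those three primitives, using Clojure's built-in (thread-blocking) futures as a stand-in for Lamina's asynchronous ones; the names `merge-results` and `apply-future` are illustrative, not Lamina's actual API:

```clojure
;; 1. an asynchronous future (here approximated by clojure.core/future,
;;    which blocks on deref rather than being truly asynchronous)
(def fut (future 42))

;; 2. merge a list of futures and realized values into a single
;;    future representing a list of values (hypothetical helper)
(defn merge-results [xs]
  (future (doall (map #(if (future? %) (deref %) %) xs))))

;; 3. take a function and a future, and return a future representing
;;    (apply f value-of-future), realized once the input is realized
;;    (hypothetical helper)
(defn apply-future [f fut]
  (future (apply f (deref fut))))
```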

All we need to do is take every expression where a function is called, merge the arguments together, and apply the third primitive to the result. The expression will return a future, which can then be picked up by the parent expression, and so on.
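Concretely, a nested call might be rewritten like this (assuming `merge-results` and `apply-future` are helpers with the semantics of the second and third primitives above; the names and expansion are illustrative, not Lamina's actual output):

```clojure
;; the original expression
(+ (f a) (g b))

;; becomes something shaped like
(apply-future +
  (merge-results [(f a) (g b)]))

;; (f a) and (g b) are themselves rewritten the same way, so the whole
;; expression tree returns a future that is realized bottom-up.
```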

This glosses over how special forms are handled, and how the 'force' functionality works. Also, as I mentioned in the presentation, how the code executes is pretty opaque, and some sort of higher-order analysis would be nice so that it could be visualized using graphviz or something.

This is still an experimental feature, and will probably remain so until the 1.0.0 release of Lamina (I'm working on 0.5.0 right now). I'm not aware of any other macro that rips apart normal Clojure code quite as completely, which means that this is sort of uncharted territory. I want to make sure I've avoided all the pitfalls before calling it ready for production usage.

[1] https://github.com/ztellman/lamina/wiki/Introduction



