That was quite naturally a filter graph: every component was time-dependent, caching and logging were important (rewind to find the blob from 5 minutes ago), and parallelism was crucial because the system had to synchronize multiple IP cameras.
Amazingly, it was possible to do this in an entirely safe way in C++, with templates.
I wish I could put the code up on github, but unfortunately the company got taken apart and sold, so the code is in limbo.
Here is a more in-depth link: http://stackoverflow.com/questions/1894209/how-to-read-menta...
But the idea is: delete all the parentheses from your mind, and the code should still make sense. The parentheses are there to help the computer, not you.
I can still see them if I need to, but they have the same visual weight as tabs or end-of-line spaces. I like this because it makes the code feel more like Python to me.
I'm building a stream-processing system much like Riemann in Python, and the user configuration is built on co-routines. The stream is essentially a graph of co-routines (although typically with only one input and many possible buckets). I tend to think of it as data flowing through functions.
It might look like:
stream = when(lambda event: event['metric'] > 2.0,
              by(lambda event: (event['host'], event['metric']),
                 ...))  # '...' stands in for the terminal consumer co-routine
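To make that concrete, here's a minimal self-contained sketch of the pattern (these when/by implementations are illustrative stand-ins, not my actual code, and printer is just a hypothetical sink):

    def coroutine(func):
        # Prime a generator so it's immediately ready for .send()
        def start(*args, **kwargs):
            gen = func(*args, **kwargs)
            next(gen)
            return gen
        return start

    @coroutine
    def when(predicate, target):
        # Forward only the events that match the predicate
        while True:
            event = (yield)
            if predicate(event):
                target.send(event)

    @coroutine
    def by(key_fn, make_target):
        # Fan events out into per-key buckets, created on demand
        buckets = {}
        while True:
            event = (yield)
            key = key_fn(event)
            if key not in buckets:
                buckets[key] = make_target()
            buckets[key].send(event)

    @coroutine
    def printer():
        while True:
            print((yield))

    stream = when(lambda event: event['metric'] > 2.0,
                  by(lambda event: (event['host'], event['metric']), printer))

    stream.send({'host': 'a', 'metric': 3.5})   # passes the filter, printed
    stream.send({'host': 'a', 'metric': 1.0})   # dropped by when()

Data really does just flow through functions: each .send() pushes one event down the graph.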
Anyway, I wasn't suggesting Graph was trivial or anything. The OP says they're a Python programmer, and I was suggesting Python options.
Is the special-purpose declarative syntax useful in its own right?
I can believe that the actual LISP data structure of a function may not be the most convenient to work with for what you're doing, but it seems like you ought to be able to translate from LISP code into whatever graph structure you want, as long as all the function calls are pure. Or are there reasons this isn't feasible?
Graph forces you to make the steps that you care about explicit, and in exchange you get a nice way to observe, reason about, and change your code in terms of these steps. The goal is to make the overall process as clear and non-magical as possible, while incurring as little programmer overhead as possible.
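To give a flavor in Python terms, here's a hedged sketch of the core idea (not our actual API, which is Clojure): each step is a named function whose parameter names declare its inputs, and "compiling" just wires the steps up in dependency order.

    import inspect

    def compile_graph(graph):
        # Repeatedly run any step whose declared inputs are all available
        def run(**inputs):
            results = dict(inputs)
            remaining = dict(graph)
            while remaining:
                for name, fn in list(remaining.items()):
                    deps = inspect.signature(fn).parameters
                    if all(d in results for d in deps):
                        results[name] = fn(**{d: results[d] for d in deps})
                        del remaining[name]
                        break
                else:
                    raise ValueError('unsatisfiable steps: %s' % list(remaining))
            return results
        return run

    # A stats graph in this style: every step is explicit and observable
    stats = compile_graph({
        'n':  lambda xs: len(xs),
        'm':  lambda xs, n: sum(xs) / float(n),
        'm2': lambda xs, n: sum(x * x for x in xs) / float(n),
        'v':  lambda m, m2: m2 - m * m,
    })

    print(stats(xs=[1, 2, 3, 6]))  # includes n=4, m=3.0, m2=12.5, v=3.5

Because the steps are data, the same graph description can be inspected, instrumented, or recompiled differently without touching the step bodies.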
I think it's a really interesting project to attempt to provide similar tools over ordinary functions, but that seems like a much loftier goal -- Graph is pragmatic, simple, and it works now :).
I feel like we still don't know what more traditional tools and workflows look like from the graph point-of-view.
Edit: cool project, btw.
Graph backtraces look like ordinary stacktraces -- the compiled output is basically the same as if you had written the function by hand.
For logging, we wrap each node in an 'observer' with its path through the graph injected; this automatically records execution time and exceptions from each node, and lets you spit stuff out to the dashboard, where it appears within the graph structure. There's an example of this in graph_examples_test.clj, I believe.
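In Python terms, the wrapping looks roughly like this (hypothetical names; the real machinery is Clojure and differs in detail):

    import time

    def observe(path, fn, report):
        # Wrap one graph node: record timing and exceptions,
        # tagged with the node's path through the graph
        def wrapped(**kwargs):
            start = time.time()
            try:
                result = fn(**kwargs)
                report(path=path, seconds=time.time() - start, error=None)
                return result
            except Exception as e:
                report(path=path, seconds=time.time() - start, error=e)
                raise
        return wrapped

Wrapping every node this way before compilation is what lets the dashboard present timings and failures in terms of the graph structure.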
Here's some WIP on a more practical parallel compilation: https://gist.github.com/w01fe/4710008
Once the kinks are worked out, this will go into the OSS project. Presumably the concurrency level will be controlled by a parameter and/or by passing in an appropriate ExecutorService.
From there, you can create all sorts of adapters, and the "graph" engine just farms out the execution to the appropriate one. This could be anything from a job daemon on a batch farm to, say, a typed ThreadPoolExecutor on a single machine (which could execute native Clojure/Java, or any other scripting language).
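Sketched in Python, the pluggable-executor idea might look like this (hypothetical run_parallel, not any real library's API; it also assumes enough worker threads, or a dependency-ordered graph dict, to avoid deadlocking on blocked tasks):

    import inspect
    from concurrent.futures import Future, ThreadPoolExecutor

    def run_parallel(graph, inputs, executor):
        # One Future per value; each node's task blocks on its deps' Futures
        futures = {k: Future() for k in list(inputs) + list(graph)}
        for k, v in inputs.items():
            futures[k].set_result(v)

        def make_task(name, fn, deps):
            def task():
                try:
                    args = {d: futures[d].result() for d in deps}
                    futures[name].set_result(fn(**args))
                except Exception as e:
                    futures[name].set_exception(e)
            return task

        for name, fn in graph.items():
            deps = list(inspect.signature(fn).parameters)
            executor.submit(make_task(name, fn, deps))
        return {name: futures[name].result() for name in graph}

    with ThreadPoolExecutor(max_workers=4) as ex:
        out = run_parallel({'n': lambda xs: len(xs),
                            'm': lambda xs, n: sum(xs) / float(n)},
                           {'xs': [1, 2, 3, 6]}, ex)
        print(out)  # {'n': 4, 'm': 3.0}

Swapping in a different backend (a batch-farm client, say) is then just a matter of providing the same submit() interface.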