Looks like really cool tech - at first glance it reminds me of the UI.Next reactive rendering model in WebSharper/F# (http://www.websharper.com/docs/ui.next), which also has an OCaml/ML heritage.
Have been thinking about using OCaml again for a new project - does anyone know the state of libraries for common web tasks, like AWS, these days?
I've been using OCaml to build a language and runtime system for cloud orchestration[0]. As part of that, I've developed an AWS library whose code is generated from the service descriptions published by boto. It will be open-sourced at some point in the future, along with a few other things that should make developing web stacks using OCaml much easier.
If you're looking to try out OCaml, I'd say the time to do so is quickly approaching.
Great to hear - it sounds like a fascinating project, and I look forward to trying out the AWS library. Funnily enough, the project I considered using OCaml for was to build a language & runtime for container orchestration.
Yes, OCaml dev seems to be picking up again - I used to use OCaml/JoCaml a few years ago but moved to Haskell and F# in the meantime. However, I'm very excited by the work on Flambda, multicore, and hopefully modular implicits, as OCaml always felt the most natural.
I've been learning OCaml lately and I can't help but wonder (aside from the fact that it's an ML lang) why OCaml didn't capture the infrastructure tooling space and Go did. OCaml is so much fun to write, easy to reason about and has pretty good TLS/SSH libraries. Here's to hoping OCaml grows more!
Nobody ever accused OCaml of having good marketing and sexy websites (at least before ocaml.org). And for a long time, tooling played second fiddle (for instance, a package manager is a somewhat recent addition, and the less said about the standard library, the better).
Jane Street's Core has, to all intents and purposes, solved the standard library problem. I admit I had my doubts in the past about which of Batteries or Core would be the "winner", so I avoided them both and stuck with the default standard lib, but I think anyone starting a project today should start it on Core.
This may get better with the latest OCaml, but Core-powered binaries are ginormous. And AFAIK, Core doesn't work with js_of_ocaml (and I'm not sure if it works on Windows at all).
Every Var keeps track of the inputs that affect its computed value, and has an associated continuation responsible for carrying out the computation whenever an input changes (which, in turn, triggers recomputations further down the graph).
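For a sense of what that looks like from the user's side, here's a minimal sketch based on Incremental's documented interface (exact module and function names may differ slightly between versions):

  (* Minimal sketch, assuming the standard Incremental entry points. *)
  module Inc = Incremental.Make ()

  let width  = Inc.Var.create 3
  let height = Inc.Var.create 4

  (* [area] is a node in the graph; it is recomputed only when one of
     its inputs changes. *)
  let area =
    Inc.map2 (Inc.Var.watch width) (Inc.Var.watch height) ~f:( * )

  let obs = Inc.observe area

  let () =
    Inc.stabilize ();
    Printf.printf "area = %d\n" (Inc.Observer.value_exn obs);  (* 12 *)
    Inc.Var.set width 10;
    Inc.stabilize ();
    Printf.printf "area = %d\n" (Inc.Observer.value_exn obs)   (* 40 *)

After the first stabilize, setting `width` marks only the nodes downstream of it as stale, so only those are recomputed on the next stabilize.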
One distinction is that Goldman's SecDb graph is really optimized around offline derivative pricing and risk calculations, which has always been one of the core strengths of Goldman's quant platform. There is a slight distinction between reactive graphs used for those approaches v. those that are more targeted toward real-time online applications.
In the implementation yes, minimizing single-event latency and maximizing batch throughput are largely mutually incompatible. But from a user/API standpoint, there's little difference.
What is the difference between a "self adjusting computation" and a "genetic algorithm," or is it merely a choice of terminology? Here is a scholarly article that uses both in the same title: http://link.springer.com/chapter/10.1007%2F978-3-540-87734-9...
It looks like a self-adjusting computation, in the sense that Jane Street and the paper they refer to use the term, is similar to functional reactive programming. It looks like you can create observable values, and create observable computations on those values. The dependency graph is dynamic, which it appears is the improvement here over other FRP designs?
Genetic algorithms are a different sort of thing, and I can see how "self adjusting computation" could be used to describe either.
Functional reactive programming is not usually incremental, while self adjusting computations are not usually reactive, which is why there isn't much overlap between the concepts. Of course, you are free to mix concepts from both as you need to.
I'm not familiar with OCaml, but this looks very similar to the reactivity model used inside web front-end libraries like Knockout, Vue.js - in which each "binding" in the template is essentially a self-adjusting computation that updates the DOM. In addition, the Tracker library in Meteor enables similar dependency-tracking computations across the entire stack. It's interesting to see this type of reactivity model used in different contexts.
Quite different. To achieve the effects of "Incremental" you would need lazy evaluation coupled with a caching layer, so that successive calls to pure functions with same arguments are not re-computed. This optimization technique is in a sense similar to Common Subexpression Elimination.
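For instance, here's a rough sketch of the caching part in plain OCaml; the `memoize` helper below is hypothetical, not something Incremental actually exposes:

  (* Wrap a pure function so repeated calls with the same argument
     reuse the stored result instead of recomputing it. *)
  let memoize (type a b) (f : a -> b) : a -> b =
    let cache : (a, b) Hashtbl.t = Hashtbl.create 16 in
    fun x ->
      match Hashtbl.find_opt cache x with
      | Some y -> y
      | None ->
        let y = f x in
        Hashtbl.add cache x y;
        y

  let calls = ref 0
  let slow_square x = incr calls; x * x
  let square = memoize slow_square

  let () =
    ignore (square 4 : int);
    ignore (square 4 : int);
    Printf.printf "ran %d time(s)\n" !calls  (* 1: the second call hit the cache *)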
Haskell does not do this. Haskell is lazy (computed results are shared), but Haskell does not memoize by default. To illustrate:
  add x y = x + y        -- helper used below
  func y = let z = y + y
           in (add 1 z) * (add 1 z)
In `func`, the result of computing `y + y` is shared because it has been given a name: `z`. However, the result of `add 1 z` is _not_ given a name and therefore has to be computed twice. Because Haskell is pure, both calls will return the same result; however, that result is not saved.
This avoids needing to save the results of _all_ function calls.
That's a bad example, since GHC will perform common subexpression elimination, which effectively gives the same name to the two `add 1 z` expressions (unless you specifically ask it not to, with `-O0`).
Furthermore, Incremental isn't memoizing every function call. It just keeps the intermediate results of the latest computation in memory, so that when the inputs change, it can determine exactly which subresults need recomputation and which can be reused. It's like Excel, but with dynamic dependencies.
(But yes, Haskell definitely does not do this. You could say that Haskell is about doing the least amount of work to compute something fresh, while Incremental is about doing the least amount of work to recompute something we (almost) already computed just before.)
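To illustrate the "dynamic dependencies" part, here is a rough sketch using the `bind` operation from Incremental's documented interface (again, exact names may differ slightly); which input the result depends on is only decided when the graph is stabilized:

  (* Sketch only: the dependency of [temp] switches between two Vars. *)
  module Inc = Incremental.Make ()

  let use_fahrenheit = Inc.Var.create false
  let celsius        = Inc.Var.create 20.0
  let fahrenheit     = Inc.Var.create 68.0

  let temp =
    Inc.bind (Inc.Var.watch use_fahrenheit) ~f:(fun f ->
      if f then Inc.Var.watch fahrenheit else Inc.Var.watch celsius)

  let obs = Inc.observe temp

  let () =
    Inc.stabilize ();
    Printf.printf "%.1f\n" (Inc.Observer.value_exn obs);   (* 20.0 *)
    Inc.Var.set use_fahrenheit true;
    Inc.stabilize ();
    Printf.printf "%.1f\n" (Inc.Observer.value_exn obs)    (* 68.0 *)

As long as `use_fahrenheit` is false, changes to `fahrenheit` trigger no recomputation at all, because that Var simply isn't in the active part of the graph.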
Actually, that's an interesting question. Having built similar systems before, I'd say the main reason you would not simply rerun a function every time its input is updated is that you often need your computations to keep a certain amount of state, which may depend on the input. However, as Incremental does not seem to support much in the way of stateful computations, I'm a bit puzzled as to what problem it addresses.
To be honest (and it's not a defensible position now that I write it out loud), I wait for your blog posts to get to HN. I have not yet found anyone whose writing I find compelling enough who doesn't either mail me or end up on HN.
Possibly I see my inbox as an RSS feed (I subscribe, you send me mails)
What seems painful about it for you? IMHO handling feeds is only painful if you have/want to switch readers or if you want to browse old archives. Other than that, you just drop a blog URL in your reader and it gets you the updates.
I use websec for a few such blogs; it emails me the content and even highlights the changed parts of the page.
It's a program I inherited after it was abandoned and did some maintenance on, so I'm biased. I haven't updated it in a long while now, and at one point I even wrote a Python alternative, since I actually dislike Perl.
You can use Firefox's Live Bookmarks, which get updated every time you open the bookmark. To add an RSS feed, just go to the feed page and take a look at what's at the top of the page; it should be easy to understand.