I haven’t dug into the IC papers yet, but here is my understanding.
Partial evaluation is the specialization of a function over some of its arguments. Say f(x, y, z) => v; a partially evaluated function f’(x, y) can be computed by binding ‘z’. Applying f’ to (x, y) then yields the same value ‘v’ as f(x, y, z).
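A quick sketch of that in Python (my own illustration, not from the papers), using `functools.partial` to bind ‘z’ and produce the residual function f’:

```python
from functools import partial

def f(x, y, z):
    return x + y * z

# Bind z = 10; f2 plays the role of the residual function f'(x, y).
f2 = partial(f, z=10)

print(f(1, 2, 10))   # 21
print(f2(1, 2))      # 21 -- same value v as the unspecialized call
```

Note that `functools.partial` only binds the argument; a real partial evaluator would additionally precompute any work inside f that depends only on z.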
Staged computation is a generalization (restriction?) of partial evaluation, but for meta-programs that generate other programs. That is to say, the result value v of a staged meta-program M is itself a computable function. A meta-program M that generates the previous function f(x, y, z) can itself be decomposed into multiple steps M’’(M’(…)), and having those intervening steps of the computation openly accessible allows for interesting program-manipulation possibilities. Which is why staged computation subsumes everything from compiler generation, to macros, to runtime optimization.
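To make the "result is itself a program" point concrete, here is a toy two-stage sketch in Python (my own example): the first stage generates source text for a specialized function, and the second stage compiles and runs it. The intermediate program text is openly accessible between the stages.

```python
def stage_power(n):
    """Meta-program: emit source for a function computing x**n as unrolled multiplies."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    return f"def power_{n}(x):\n    return {body}\n"

source = stage_power(3)        # stage 1: the result is a program, inspectable as text
namespace = {}
exec(source, namespace)        # stage 2: compile and install the generated function
power_3 = namespace["power_3"]

print(source)                  # the intervening program is visible
print(power_3(2))              # 8
```

Because the stage-1 output is an ordinary value, other tools can rewrite or optimize it before stage 2 runs, which is the manipulation possibility described above.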
So, incremental computation (is?) looks to me like partial evaluation + reactive programming? That is, a PE’ed function will get recomputed if any of its arguments change value? This doesn’t make sense in a formal model like the lambda calculus, which captures values and renames variables. So perhaps there is something other than call-by-value at work here?
(define (f x)
  (let ((two 2))
    (+ x two)))
(incremental-computation:change-value 'two 3)
Can someone explain how this actually works?
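One way such a system *might* work (a toy Python sketch of my own, not taken from any IC implementation): treat the bound name as a mutable "cell", record which computations depend on it, and rerun those computations when the cell changes, analogous to `(change-value 'two 3)` above.

```python
class Cell:
    """A named value that notifies dependent computations when it changes."""
    def __init__(self, value):
        self.value = value
        self.dependents = []

    def set(self, value):          # analogous to change-value
        self.value = value
        for dep in self.dependents:
            dep.recompute()

class Computed:
    """A computation over cells, rerun whenever any input cell changes."""
    def __init__(self, fn, *cells):
        self.fn = fn
        self.cells = cells
        for c in cells:
            c.dependents.append(self)
        self.recompute()

    def recompute(self):
        self.value = self.fn(*(c.value for c in self.cells))

two = Cell(2)
x = Cell(5)
s = Computed(lambda a, b: a + b, x, two)
print(s.value)   # 7
two.set(3)       # like (incremental-computation:change-value 'two 3)
print(s.value)   # 8 -- recomputed with the new binding
```

This is change propagation rather than call-by-value substitution, which may be the "something other" at work: the binding stays live instead of being consumed.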
50,000 lines are compiled during boot (half a second).
50,000 lines are compiled during MakeAll (half a second).
If you echo to the screen, it compiles slowly.