
Function Inheritance Is Fun and Easy - ingve
http://homes.cs.washington.edu/~asampson/blog/functioninheritance.html
======
transfire
It is very clever, but I much prefer the idea of building AOP into a language.
Code will be much cleaner and probably a bit more optimized as well.

I find it interesting how category theory and higher-order functions are to
functional programming what design patterns are to OOP.

~~~
vezzy-fnord
Was about to mention aspect-oriented programming myself; the pattern here
strongly reminded me of it.

------
csense
I've read a couple recent articles about the Y combinator on HN, and it seems
like an interesting theoretical concept for e.g. showing that lambda calculus
is Turing-complete.

But while it might be useful in minimalist contexts like the lambda calculus
without much built-in functionality, what problem is it solving in this
context?

It seems like a much more straightforward way to accomplish the same thing,
without needing the Y combinator, would be (in Python syntax):

    
    
        def trace(f):
            def wrapper(*args):
                print("called with args", args)
                return f(*args)
            return wrapper
    
        fib_trace = trace(fib)
    

Why is the Y combinator needed for this? I don't get it.

~~~
transfire
Because fib is recursive, I'm pretty sure you will only get one printout, not
one for each call to fib.

~~~
Lord_DeathMatch
As it is here, yeah, only once at the root level. But if you were to use it as
a decorator on the original function, and reassign to the original name (as
the decorator syntax does), it would print at every level, as the recursive
call would bind to the decorated function.
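
In Python terms, a minimal sketch of that difference (the names are mine, and
the wrapper records calls in a list rather than printing, just to make the
bookkeeping visible):

```python
CALLS = []  # records every time the wrapper fires

def trace(f):
    def wrapper(*args):
        CALLS.append(args)
        return f(*args)
    return wrapper

def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib_trace = trace(fib)        # new name: recursion inside fib still hits the bare fib
fib_trace(3)
calls_with_new_name = len(CALLS)   # only the top-level call is traced

CALLS.clear()
fib = trace(fib)              # rebind the original name, as @trace would
fib(3)
calls_with_rebinding = len(CALLS)  # every recursive call now goes through the wrapper
```

The difference comes down to Python's late binding: the recursive call inside
`fib` looks up the module-level name `fib` at call time, so it only hits the
wrapper once that name has been rebound.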

------
dwenzek
Nice. But the implementation has some subtleties. I use OCaml.

First, `fix f` must explicitly be defined as a function. If we replace the
expression `fun n -> f (fix f) n` by `f (fix f)`, then evaluating `fix f`
recurses forever and we get a "Stack overflow during evaluation", because
OCaml evaluates the recursive call eagerly.

    
    
        let rec fix f = fun n -> f (fix f) n
    
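The same eta-expansion subtlety shows up in any strict language. A rough
Python analogue (my own sketch, not from the post): returning `f(fix(f))`
directly would loop before the function is ever called, while wrapping it in
a lambda delays the recursive evaluation.

```python
def fix(f):
    # lambda delays the evaluation of fix(f) until the function is applied;
    # "return f(fix(f))" would recurse forever in a strict language
    return lambda n: f(fix(f))(n)

def fibgen(fib):
    return lambda n: n if n < 2 else fib(n - 1) + fib(n - 2)

fib = fix(fibgen)
```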

Then fix can be applied to some generator:

    
    
        let fibgen(fib) =
          fun n -> if n = 0 then 0
                   else if n = 1 then 1
                   else fib(n-1) + fib(n-2)
    
        let fib = fix(fibgen)
        assert (fib 6 = 8)
    

The compose and trace functions pose no issue.

    
    
        let (>>) f g x = g (f x)  (* left-to-right composition: f first, then g *)
    
        let trace f_name f = fun n ->
          Format.printf "%s(%d)\n%!" f_name n;
          f n
    
        let trace_fib = fix (trace "fib" >> fibgen)
    
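For comparison, a self-contained Python sketch of the traced pipeline (the
names `pipe` and `TRACE_LOG` are mine; the trace records to a list instead of
printing). Because the composition is left-to-right, `fibgen` ends up
outermost and the trace wraps only the recursive calls:

```python
TRACE_LOG = []

def fix(f):
    return lambda n: f(fix(f))(n)

def fibgen(fib):
    return lambda n: n if n < 2 else fib(n - 1) + fib(n - 2)

def trace(name):
    def transformer(f):
        def traced(n):
            TRACE_LOG.append((name, n))   # log instead of printing
            return f(n)
        return traced
    return transformer

def pipe(*fs):
    # left-to-right composition, like (>>) above
    def composed(x):
        for f in fs:
            x = f(x)
        return x
    return composed

trace_fib = fix(pipe(trace("fib"), fibgen))
```

Calling `trace_fib(0)` logs nothing, since the base case returns before any
recursive call reaches the trace wrapper.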

But the memoize function cannot simply be defined as `memoize f = fun n ->
...`, otherwise a fresh cache would be used for each call! Indeed, `fix f` is
re-evaluated at each recursive call in the generator: in the case of `fix
(memoize >> fibgen)`, memoize is repeatedly applied to a new function to be
cached.

Hence the extra step that explicitly creates the cache once:

    
    
        let memoize size =
          let cache = Hashtbl.create size in
          fun f n ->
            try Hashtbl.find cache n
            with Not_found ->
              let r = f n in
              Hashtbl.add cache n r;
              r
    
        let memoize_fib = fix (memoize 16 >> trace "fib" >> fibgen)
    
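For contrast, a Python sketch of the fresh-cache pitfall described above (the
names are mine): when the cache is created inside the transformer itself,
each round of `fix` re-applies the transformer and starts from an empty
cache, so nothing is ever reused.

```python
def fix(f):
    return lambda n: f(fix(f))(n)

def fibgen(fib):
    return lambda n: n if n < 2 else fib(n - 1) + fib(n - 2)

CACHES_CREATED = []

def memoize_broken(f):
    cache = {}                 # a brand-new cache on every application
    CACHES_CREATED.append(cache)
    def memoized(n):
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return memoized

fib = fix(lambda rec: memoize_broken(fibgen(rec)))
fib(6)
# CACHES_CREATED now holds one cache per recursive round: no sharing happened
```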

Last subtlety: the order in which the transformers are applied. When we write
`memoize 16 >> trace "fib" >> fibgen`, the resulting generator `f` layers, in
order, memoization, tracing and the recursive fib computation onto itself in
disguise, `fix f`. The last transformer in the chain ends up outermost and
takes control of each call; here the fib computation invokes the rest of the
chain, through `fix f`, only when n is greater than 1. So a call to
`trace_fib 0` produces no trace, and a call to `memoize_fib 6` caches the
results for 4 and 5 (and below) but not for 6.

Hence, to memoize all calls and trace only the actual computations of fib, we
must reorder the transformations:

    
    
        let memoize_fib = fix (fibgen >> trace "fib" >> memoize 16)


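The reordering point can be sketched in Python as well (the names are mine):
putting the memoizer last in the left-to-right chain makes it outermost, so
every call, the top-level one included, goes through the cache.

```python
def fix(f):
    return lambda n: f(fix(f))(n)

def fibgen(fib):
    return lambda n: n if n < 2 else fib(n - 1) + fib(n - 2)

def memoize():
    cache = {}                 # created once, shared by every round of fix
    def transformer(f):
        def memoized(n):
            if n not in cache:
                cache[n] = f(n)
            return cache[n]
        return memoized
    return transformer, cache  # expose the cache for inspection

def pipe(*fs):
    # left-to-right composition, like (>>) above
    def composed(x):
        for f in fs:
            x = f(x)
        return x
    return composed

memo, cache = memoize()
memoize_fib = fix(pipe(fibgen, memo))   # memoizer outermost
memoize_fib(6)
# every argument 0..6 is now cached, the top-level call included
```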