
The Racket Blog: Dynamic Programming versus Memoization - jasonwatkinspdx
http://blog.racket-lang.org/2012/08/dynamic-programming-versus-memoization.html
======
tbenst
I had Shriram as a professor - great lecturer. Here's a racket implementation
for the memoize function based off of class notes:

    #lang racket
    
    (define memo-table (box empty))
    (define-struct memo (key ans))
    
    (define (memoize f)
      (lambda args
        (local ([define lookup
                  (filter (lambda (v) (equal? args (memo-key v)))
                          (unbox memo-table))])
          (if (empty? lookup)
              (local ([define ans (apply f args)])
                (begin
                  (set-box! memo-table
                            (cons (make-memo args ans) (unbox memo-table)))
                  ans))
              (memo-ans (first lookup))))))

Edit: I guess HN doesn't like my formatting. Here's a readable version
<http://pastie.org/4591751>

~~~
Fixnum
Be careful -- this doesn't fully memoize recursive functions unless you force
the computation of intermediate results, since the recursive calls don't use
memoization:

    (define memo-fib (memoize fib)) ;; example from SICP
    (memo-fib 40)                   ;; long wait

I would love to see a good general-purpose 'memoize but rather doubt it's
possible, though I think I remember seeing one in Common Lisp in "Paradigms of
Artificial Intelligence Programming" that exploited CL's weird namespacing
rules for functions to make recursive functions like 'fib run fast.
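
The pitfall is language-independent; here is a sketch of it in Python (the `memoize` below is an illustrative stand-in for the Racket one above): wrapping an already-recursive `fib` only caches top-level results, because the calls inside `fib` still go to the plain `fib`.

```python
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def memoize(f):
    table = {}
    def wrapped(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]
    return wrapped

memo_fib = memoize(fib)
# memo_fib(40) is still exponential the first time: the recursive calls
# inside fib bypass the table, so only the outermost result gets cached.
```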

~~~
kenko
You can solve the problem with the parent's memoization by writing the
original fib using open recursion and closing it with a fixed-point operator.
Doesn't help for library functions or things like that, though.
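
A sketch of that fix in Python (names are illustrative): write `fib` in open-recursive style, taking its own recursive reference as a parameter, then close it with a memoizing fixed-point operator so every recursive call goes back through the cache.

```python
def open_fib(self, n):
    # open recursion: the recursive reference is a parameter
    return n if n < 2 else self(n - 1) + self(n - 2)

def memo_fix(f):
    # memoizing fixed-point operator: ties the knot so that all
    # recursive calls of f route through the memo table
    table = {}
    def fixed(n):
        if n not in table:
            table[n] = f(fixed, n)
        return table[n]
    return fixed

fib = memo_fix(open_fib)
# fib(40) is now fast: every intermediate result lands in the table
```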

------
mattj
I think this neglects a performance consideration: dynamic programming is
often simpler for a compiler to optimize, since it's generally a set of nested
loops. Sometimes you can vectorize the operations, but often you can simply
exploit locality to keep the whole active set in L1 cache (not a compiler
optimization per se).

Memoization, however, is much more of a black box. Very few compilers will
handle the branching present in memoized function calls the same way they'd
handle a loop (which is also a branch, but a far more predictable one).

That all being said, memoization is often simpler to reason about and great
for infrequent computations.
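
To make the "set of nested loops" shape concrete, here is a bottom-up DP sketch (edit distance, chosen as an illustration): the access pattern is two tight loops over a table, which is far easier for a compiler or vectorizer to analyze than a memoized recursion's branchy table lookups.

```python
def edit_distance(a, b):
    # Bottom-up DP: fill the (m+1) x (n+1) table row by row.
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]
```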

~~~
rcfox
This won't help everywhere, but sometimes you can give the compiler a hint
that a certain branch is less likely to be taken. The computation branch is
going to be slower anyway, and only happen once, so tell the compiler to
assume the lookup branch will be taken most of the time.

------
ced
Wikipedia has an implementation of Fibonacci with DP, and another with
memoization. It's a clearer explanation than the blog provides.

[https://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_...](https://en.wikipedia.org/wiki/Dynamic_programming#Fibonacci_sequence)
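
For comparison with the memoized versions elsewhere in the thread, the bottom-up DP variant of Fibonacci can be sketched like this (Python, illustrative): the table shrinks to just the last two entries, with no recursion and no cache lookups.

```python
def fib(n):
    # Bottom-up DP: iterate from the base cases, keeping only the
    # two most recent values of the table.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```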

------
lukev
My lack of a formal CS background is hurting me here. Could someone speak to
the distinction between a tree and a DAG they make here?

I was under the impression that the terms were more or less synonymous.

~~~
nandemo
A tree is a DAG that is connected: if _a_ and _b_ are nodes of a tree, there
is a path between _a_ and _b_. But in general a DAG doesn't have to be
connected. So all trees are DAGs but not all DAGs are trees: some are forests
i.e. unions of trees.

~~~
ot
That's wrong, not all connected DAGs are trees.

A DAG is a directed graph that has no (directed) loops. Consider the
following:

    
    
         A
        / \
       v   v
      B     C
       \   /
        v v
         D
    

This is a DAG but not a tree, because it has an undirected loop but no
directed loops.

~~~
nandemo
You're totally right (I had a thinko and managed to miss the D in the DAG). My
apologies for the misinformation.

------
drunkpotato
This is a really interesting way to think about computations. I mostly think
of memoization for I/O things like database lookups, or long computations. I'd
never thought of it in comparison to dynamic programming algorithms.

~~~
spaghetti
Memoization and dynamic programming come together in top-down DP algorithms,
also referred to as recursion with memoization. The recursive
function often looks something like:

    
    
        f(arguments) {
            results = memoTable[arguments]
    
            if(results) // use results
            else
                results = // expensive recursive call to f
                memoTable[arguments] = results
        }
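
A direct, runnable translation of that sketch (Python, with Fibonacci standing in for the expensive recursive call; the dict plays the role of `memoTable`):

```python
memo_table = {}  # plays the role of memoTable above

def f(n):
    if n in memo_table:        # use results
        return memo_table[n]
    # expensive recursive call to f
    result = n if n < 2 else f(n - 1) + f(n - 2)
    memo_table[n] = result
    return result
```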

~~~
oskarkv
As far as I understand, this is just regular memoization. Often, in
functional languages, one turns a regular function into a memoized
version simply by calling memoize(function). Sometimes that works to create a
function that does what your code does, and sometimes it does not. When it
does not, I think that's a shortcoming of the language/implementation.
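
One case where the generic wrapper does work on a recursive function: in Python, rebinding the function's name to the memoized version makes the recursive calls go through the cache too, because the call inside the body resolves the global name at call time. A sketch (illustrative names):

```python
def memoize(f):
    table = {}
    def wrapped(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]
    return wrapped

@memoize  # rebinds the name fib to the wrapper, so the recursive
          # calls inside the body now hit the memo table as well
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```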

------
hobbyist
Dynamic programming is an optimization technique based on divide and conquer,
whereas memoization is an added optimization within the dynamic-programming
paradigm. The optimization we apply while calculating the Fibonacci sequence
is memoization: it prevents us from walking down the tree to the leaves every
time.

