

Memoization explained using a python implementation of fibonacci - code_devil
http://geeklogs.posterous.com/what-is-memoization

======
vannevar
The main reason people need to keep explaining memoization is that they insist
on using the unnecessarily obfuscating academic term "memoization", instead of
the more commonly understood term "caching".

~~~
noblethrasher
There is a subtle difference though. With general caching you have to worry
about stale data. Since memoization applies to pure functions where a given
input always results in the same output, "cache invalidation" isn't much of an
issue.

~~~
vannevar
FTA:

"Memoization is a computer science concept to optimize programs by avoiding
computations that have already been done and to reuse it. This is achieved by
storing the computations in a lookup table and retrieving them if a need for it
arrives in a future computation step."

From Wikipedia:

"In computer engineering, a cache (/ˈkæʃ/ kash[1]) is a component that
transparently stores data so that future requests for that data can be served
faster. The data that is stored within a cache might be _values that have been
computed earlier_ or duplicates of original values that are stored elsewhere.
If requested data is contained in the cache (cache hit), this request can be
served by simply reading the cache, which is comparatively faster." (Emphasis
mine.)

~~~
noblethrasher
You're right that memoization is a subset of caching but it still warrants
special attention because every pure function gets to have a .Memoize()
"method" for free. This isn't true for caching in general.

~~~
vannevar
And a red car is a subset of all cars, but we don't make up a special term for
it. Many caching techniques have implementation details that distinguish them
from one another, but we nonetheless refer to them by the same term. Caching
from a database is different from caching in a CPU, but the basic mechanism
and purpose are the same, so we use the same term for clarity. I've never seen
an article on memoization that couldn't be made clearer by substituting
'caching' wherever it used 'memoization'.

------
xenomachina
This isn't right:

    fib_val = fib_mem(n-1) + fib_val(n-2)

The second term on the rhs should be fib_mem(n-2) or perhaps even
fib_list[n-2].

~~~
code_devil
Thanks. Fixed it.

------
alexis-d
Note that we can use the @functools.lru_cache(maxsize=None) decorator to get
memoization essentially for free.
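
For instance, a minimal sketch (fib here is just an illustrative name):

```python
import functools

# With an unbounded cache, the naive recursive Fibonacci
# becomes linear in n: each fib(k) is computed only once.
@functools.lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075
```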

------
gbog
The code in this post seems a bit unpythonic and "improvable" to me:

- "if len(list) > n" should probably be "if n in list".

- The function has a side effect (changing the list); side effects are best
avoided when possible.

- Memoization is not part of the logic, so it is better done with a decorator
(which would also solve the side-effect issue above).
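
A decorator along those lines might look like this (a sketch; memoize and
the wrapped fib are illustrative names, and the cache keys on positional
arguments only):

```python
import functools

def memoize(func):
    # Illustrative decorator: cache results keyed by positional arguments.
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    # The function itself stays free of caching logic and side effects.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Because fib's recursive calls go through the decorated wrapper, every
subproblem is computed at most once.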

------
randlet
I like to use dicts for memoization... This runs about 8% faster on my machine
than OP's solution:

    memo5 = {0: 0, 1: 1}
    def fib_mem5(n):
        try:
            return memo5[n]
        except KeyError:
            memo5[n] = fib_mem5(n-1) + fib_mem5(n-2)
        return memo5[n]

Note: the above version runs faster than the cleaner:

    memo6 = {0: 0, 1: 1}
    def fib_mem6(n):
        if n in memo6:
            return memo6[n]
        memo6[n] = fib_mem6(n-1) + fib_mem6(n-2)
        return memo6[n]

~~~
infinitegalaxy
How much faster? As you stated, the second version is much cleaner; I'd prefer
to use it if it isn't that much of a performance hit.

Edit: hm, in my casual testing, the 'cleaner' version takes about 10% longer.

------
Tobu
Fibonacci really doesn't need memoization since you can implement it
efficiently by keeping just the last two values on the stack. Dynamic
programming (like the knapsack problem) is a better example.
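
For example, a sketch of the two-variable iterative version (fib_iter is an
illustrative name):

```python
def fib_iter(n):
    # Keep only the previous two values: O(n) time, O(1) space,
    # and no recursion-depth limit to worry about.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```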

------
pbh
Is there a way to fix the recursion depth problem with this solution?

To me, the memoized version is more clear and should be just as efficient as
the iterative version. However, this version of fib_mem dies before
fib_mem(1000)!

Apparently, you can increase the recursion depth limit using the sys module.
Is that the right thing to do? And, if you do it, are there guidelines for
doing so without causing memory problems?

~~~
pash
Yes, use sys.setrecursionlimit(n) to change the max recursion depth. Max
supported stack depth of course varies by platform.
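
For example (5000 is an arbitrary value here; pick one based on the depth you
actually need):

```python
import sys

# Raise the interpreter's recursion limit before making the deep call.
# Each Python frame still consumes real (C) stack space, so an overly
# large limit can crash the process instead of raising RecursionError.
sys.setrecursionlimit(5000)
```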

In many cases, a better solution is to implement tail recursion, which can be
done quite easily. See
<http://paulbutler.org/archives/tail-recursion-in-python/>.

------
mattmight
You can also memoize recursive functions transparently with a memoizing fixed-
point (Y) combinator:

<http://matt.might.net/articles/implementation-of-recursive-fixed-point-y-combinator-in-javascript-for-memoization/>

The example is also Fibonacci.

------
ramblerman
I like this article on memoization -
<http://www.agillo.net/getting-groovy-with-fibonacci/>

The diagrams make it easy to understand. It is in groovy though.

