

Exponential to Linear Complexity using Memoization - amjith
http://amjith.blogspot.com/2012/02/memoization-decorator.html

======
dasht
There is a terminology problem with that article. People do get casual (or
sloppy) with these terms, but it's worth keeping them distinct:

The note describes memoization as using a "cache". That's a problem because
memoization and caching are both useful, but they are different things.

A memo (which it appears is implemented by the Python library you mention)
stores the results of earlier computations for re-use -- just as you've
described.
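
For illustration, a minimal sketch of such a memo as a Python decorator (the
decorator and the fib example below are illustrative only, not necessarily the
library the post uses):

    import functools

    def memo(fn):
        """Store every result keyed by its arguments; nothing is ever evicted."""
        results = {}

        @functools.wraps(fn)
        def wrapper(*args):
            if args not in results:
                results[args] = fn(*args)   # each unique call is computed once
            return results[args]
        return wrapper

    @memo
    def fib(n):
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    print(fib(100))  # a linear number of fib() calls instead of exponential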

A cache does the same thing _except_ that if a cache becomes over-full, some
earlier cached results may be recomputed (because they were dropped from the
cache).
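
To make the eviction concrete, here is a tiny sketch using functools.lru_cache
(available since Python 3.2); the maxsize of 2 is deliberately pathological so
that a dropped result must be recomputed:

    import functools

    calls = 0

    @functools.lru_cache(maxsize=2)        # deliberately tiny cache
    def square(n):
        global calls
        calls += 1
        return n * n

    for n in (1, 2, 3, 1):                 # 1 is evicted before it repeats
        square(n)
    print(calls)                           # 4 -- the second square(1) was recomputed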

There are two critical differences:

Minimum space requirement for a memo: O(N x S), where N is the number of unique
computations to be performed and S is the average size of parameters and
results.

Minimum space requirement for a cache: arbitrary, although performance varies
with size.

Number of repeated computations with a memo: 0

Number of computations with a cache: at least N, where N is the total number
of _unique_ computations, and as many as T, where T is the total number of
computations, duplications included.

A cache is useful, in contrast to a memo, when N x S is uncomfortably large but
"most" of the repeated computations are members of a set of size M, where
M x S' << N x S and S' is the average size of params and results in the M-set.

A memo is useful where N x S is conveniently small, or where it is unacceptable
to repeat any computation for some reason.

~~~
amjith
Thank you for the explanation. I'll update the post.

Is it possible in Python to have a fixed-size dictionary? I thought about the
repercussions of the size exploding, but I couldn't find anything readily
available to restrict the size.
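
One possibility (a rough sketch, assuming an oldest-entry-out eviction policy)
is to subclass collections.OrderedDict; for memoization specifically,
functools.lru_cache(maxsize=...) in Python 3.2+ provides a bounded cache out
of the box:

    import collections

    class BoundedDict(collections.OrderedDict):
        """Dict holding at most `maxsize` items; the oldest entry is
        dropped once the limit is exceeded."""
        def __init__(self, maxsize, *args, **kwargs):
            self.maxsize = maxsize
            super().__init__(*args, **kwargs)

        def __setitem__(self, key, value):
            super().__setitem__(key, value)
            if len(self) > self.maxsize:
                self.popitem(last=False)   # evict the oldest entry

    d = BoundedDict(maxsize=2)
    d['a'], d['b'], d['c'] = 1, 2, 3
    print(dict(d))                         # {'b': 2, 'c': 3} -- 'a' was dropped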

