Shriram Krishnamurthi explains it best:
Solving it with recursion/memoization vs. bottom-up is merely an implementation detail, while DP refers to a class of algorithms.
EDIT: Corrected definition of DP.
I don't think that's sufficient? I thought DP also implies you actually reuse the answers from subproblems.
"There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called 'divide and conquer' instead."
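A small illustration of the overlap condition (the counting code is my own, not from the quoted article): naive recursive Fibonacci recomputes the same subproblems over and over, which is exactly what makes it a candidate for DP or memoization, whereas a divide-and-conquer split never revisits a subproblem.

```python
from collections import Counter

def fib(n, calls):
    """Naive recursive Fibonacci, tallying every call to expose the overlap."""
    calls[n] += 1
    if n < 2:
        return n
    return fib(n - 1, calls) + fib(n - 2, calls)

calls = Counter()
result = fib(10, calls)
print(result)               # 55
print(calls[2])             # fib(2) alone is computed 34 times
print(sum(calls.values()))  # 177 calls for only 11 distinct subproblems
```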
And Krishnamurthi's definition is the clearest I've seen that doesn't include "memoized recursion" as a subset of Dynamic Programming.
Memoization is a great general-purpose technique that covers pretty much all the obvious DP cases to a reasonable standard. Its black-box nature allows for effective, well-compartmentalised application.
DP is an extremely complicated and invasive technique that requires rewriting algorithms to optimise for space & time use.
It doesn't seem like the advantages of DP are enough to outweigh the neat design of memoization. I'd rather see an interview question on implementing memoize in Python than on writing a DP algorithm.
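The interview exercise mentioned above might look something like this (a sketch, not the only reasonable answer): a `memoize` decorator that caches results keyed by the positional arguments.

```python
import functools

def memoize(fn):
    """Wrap fn so repeated calls with the same positional args hit a cache.
    Limitation: only hashable positional arguments are supported here."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # each subproblem computed once, so this finishes instantly
```

A production version would also handle keyword arguments; the standard library's `functools.lru_cache` already covers that.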
Memoization is a specific form of caching for functions.
That is all there is to it. Your link seems to conflate memoization with a global store that you must apply in an all-or-nothing fashion to functions. You do not, and calculating the nth Fibonacci number is a great example of how to use memoization without using O(n) space.
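To make that concrete, here's a sketch (my own, under one reading of the claim): the cache can be scoped to a single top-level call rather than living in a global store, and for Fibonacci's left-to-right call order it can even be bounded to the two most recent results.

```python
import functools

def fib_local(n):
    """Memoized Fibonacci with a cache scoped to this one call.
    Nothing global persists after fib_local returns."""
    cache = {}

    def go(k):
        if k < 2:
            return k
        if k not in cache:
            cache[k] = go(k - 1) + go(k - 2)
        return cache[k]

    return go(n)

# One reading of the space claim: bound the cache itself. For this call
# order a two-entry LRU cache is usually enough to avoid the exponential
# blowup -- an assumption about the access pattern, not a guarantee that
# lru_cache makes.
@functools.lru_cache(maxsize=2)
def fib_bounded(n):
    return n if n < 2 else fib_bounded(n - 1) + fib_bounded(n - 2)

print(fib_local(50))    # 12586269025
print(fib_bounded(20))  # 6765
```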