Is the author trying to redefine what Dynamic Programming means, or is he just confusing his readers by actually using a DP algorithm while claiming he isn't using the technical definition?
I mean: he's basically saying that Dynamic Programming is "Divide and Conquer". I'm not sure I follow him.
Especially since, unlike memoization (yes, "memoization", not "memorization" — there's no 'r'), DP doesn't just compute and cache what's necessary: it computes every subproblem, bottom-up. In that sense DP really doesn't look like "a subproblem which is easier to solve".
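To make that contrast concrete, here's a minimal sketch (using Fibonacci purely as an illustration, not anything from the book): top-down memoization only computes the subproblems the recursion actually reaches, while bottom-up DP fills in the whole table regardless.

```python
from functools import lru_cache

# Top-down memoization: only subproblems actually reached by the
# recursion get computed, and each is cached after the first call.
@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up DP (tabulation): fills the table for every index from
# 0 up to n, whether or not a caller would ever ask for it.
def fib_dp(n: int) -> int:
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

For Fibonacci the two end up visiting the same subproblems anyway, but for problems where only a sparse subset of the state space is reachable, the memoized version can skip work that the tabulated version does unconditionally.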
Dynamic Programming has a fairly precise meaning: it can be seen as divide and conquer combined with memoization, i.e. caching subproblem results to avoid recalculating overlapping subproblems.
The "subproblems easier to solve" are not easier complexity-wise; they are easier in the sense that each is a smaller instance of the main problem.