

Ask HN: Have you ever used Dynamic programming in your dev job? - questionAlgo

The reason I ask this is not to belittle the importance of algorithms, but to get a sense of how many people actually get to work on "challenging" algorithms (graph algos, dynamic programming) in their jobs. And if so, what was the "coolest" algo you ever implemented?
======
tptacek
At various times during the year, we're hip deep in graphs; much of binary
software security analysis is done over the graph of edges joining basic
blocks --- series of assembly instructions that end in a transfer of control.

I'll concede that the _algorithms_ we get to use for this are pretty boring;
pretty much Sedgewick's Algorithms-in-C level of sophistication.

Halvar Flake at Zynamics has a whole team doing genuinely interesting graph
theoretic analysis of binaries, for instance to diff out otherwise-secret
security flaws from Microsoft patches.
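
Not tptacek's actual tooling, just a minimal sketch of the structure being described: basic blocks as nodes, control transfers as edges, with a depth-first reachability walk. The block names and edges here are made up; real binary-analysis tools recover them from disassembly.

    # Hypothetical control-flow graph over basic blocks, plus a
    # depth-first reachability check over it.
    from collections import defaultdict

    cfg = defaultdict(list)            # basic block -> successor blocks
    cfg["entry"] = ["check_len"]
    cfg["check_len"] = ["copy_loop", "bail_out"]
    cfg["copy_loop"] = ["copy_loop", "ret"]
    cfg["bail_out"] = ["ret"]

    def reachable(graph, start):
        """Return the set of basic blocks reachable from `start`."""
        seen, stack = set(), [start]
        while stack:
            block = stack.pop()
            if block in seen:
                continue
            seen.add(block)
            stack.extend(graph[block])
        return seen

    print(reachable(cfg, "entry"))     # all five blocks are reachable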

------
mhill
I've never had a chance to use dynamic programming in real life beyond school
work. For "challenging" algorithm work in real life, I've done topological sort
for evaluating a dependency graph, a Bloom filter for skipping lookups, an NFA
for executing a regex-like rule graph, Rete for faster rule evaluation,
extendible hashing for fast index storage, and a complex event-flow graph with
SQL-like nodes. Those are the ones I can remember now.
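
A minimal sketch of the first of those, assuming a simple dependency graph: topological sort via Kahn's algorithm to pick an evaluation order. The example graph is invented.

    # Topological sort (Kahn's algorithm) over a made-up dependency graph.
    from collections import deque

    deps = {                 # node -> nodes it depends on
        "report": ["totals", "labels"],
        "totals": ["raw"],
        "labels": ["raw"],
        "raw":    [],
    }

    def topo_order(deps):
        indegree = {n: 0 for n in deps}
        users = {n: [] for n in deps}     # reverse edges: dependency -> dependents
        for node, needed in deps.items():
            for d in needed:
                indegree[node] += 1
                users[d].append(node)
        ready = deque(n for n, d in indegree.items() if d == 0)
        order = []
        while ready:
            n = ready.popleft()
            order.append(n)
            for u in users[n]:
                indegree[u] -= 1
                if indegree[u] == 0:
                    ready.append(u)
        if len(order) != len(deps):
            raise ValueError("dependency cycle detected")
        return order

    print(topo_order(deps))   # e.g. ['raw', 'totals', 'labels', 'report']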

BTW, I did get bitten by an O(n log n) vs O(n^2) issue, despite computers being
fast nowadays. I once used bubble sort in a cache system because it was quick
and simple. Things worked fine for a small cache but slowed down substantially
when the size went above 10,000; n^2 => 100,000,000 ops. Switching to heap sort
solved the problem. So yes, understanding algorithmic complexity does help in
real-life work.
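
A rough illustration of that gap (not the original cache code): at n = 10,000, bubble sort does on the order of 10^8 comparisons, while a heap-based sort does on the order of 10^5 operations.

    # O(n^2) vs O(n log n) at n = 10,000, on illustrative random data.
    import heapq
    import random

    data = [random.random() for _ in range(10_000)]

    def bubble_sort(xs):               # O(n^2): ~10^8 comparisons at this size
        xs = list(xs)
        for i in range(len(xs)):
            for j in range(len(xs) - 1 - i):
                if xs[j] > xs[j + 1]:
                    xs[j], xs[j + 1] = xs[j + 1], xs[j]
        return xs

    def heap_sort(xs):                 # O(n log n): ~10^5 operations at this size
        heap = list(xs)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]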

However, 99% of development is run-of-the-mill coding.

------
J-L
- edit distance for various projects (OCR post-correction, spelling correction, word alignment of bilingual text)
- Viterbi algorithm in text part-of-speech disambiguation
- Viterbi algorithm in a speech recognition decoder

(That's admittedly not a lot of DP for the years since I started coding for $$
as a student in 1994.)
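
For the Viterbi mentions above, a minimal sketch of the DP over a toy HMM; the tags and probabilities are made up, and this is not the original decoder.

    # Viterbi decoding: dynamic programming over positions, keeping the
    # best path probability per tag and backpointers for recovery.
    def viterbi(obs, states, start_p, trans_p, emit_p):
        # best[t] = probability of the best tag sequence ending in tag t
        best = {t: start_p[t] * emit_p[t].get(obs[0], 1e-12) for t in states}
        back = [{}]
        for word in obs[1:]:
            new_best, ptr = {}, {}
            for t in states:
                prev, score = max(
                    ((p, best[p] * trans_p[p][t]) for p in states),
                    key=lambda x: x[1],
                )
                new_best[t] = score * emit_p[t].get(word, 1e-12)
                ptr[t] = prev
            best, back = new_best, back + [ptr]
        tag = max(best, key=best.get)      # backtrack from the best final tag
        path = [tag]
        for ptr in reversed(back[1:]):
            tag = ptr[tag]
            path.append(tag)
        return list(reversed(path))

    states = ["NOUN", "VERB"]
    start_p = {"NOUN": 0.6, "VERB": 0.4}
    trans_p = {"NOUN": {"NOUN": 0.3, "VERB": 0.7}, "VERB": {"NOUN": 0.8, "VERB": 0.2}}
    emit_p = {"NOUN": {"dogs": 0.4, "bark": 0.1}, "VERB": {"dogs": 0.05, "bark": 0.5}}
    print(viterbi(["dogs", "bark"], states, start_p, trans_p, emit_p))  # ['NOUN', 'VERB']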

~~~
praptak
Same here: edit distance for fuzzy matching of one string against another.

I suspect that edit distance accounts for 83.52 percent of dynamic programming
worldwide.
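
For reference, a minimal sketch of Levenshtein edit distance; the DP table is filled row by row, and only the previous row is needed at each step.

    # Levenshtein edit distance, keeping just the previous DP row.
    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))            # distance from "" to b[:j]
        for i, ca in enumerate(a, start=1):
            curr = [i]                            # distance from a[:i] to ""
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution / match
            prev = curr
        return prev[-1]

    print(edit_distance("kitten", "sitting"))     # 3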

------
rcfox
Sure, I've used dynamic programming in Perl code:

    use Memoize;
    memoize('myfunc');   # cache myfunc's return values, keyed by its arguments

~~~
chancho
The only thing dynamic going on there is the type system, unless myfunc is
recursive. Dynamic programming uses memoization to achieve lower computational
complexity, but memoization != dynamic programming.

~~~
rcfox
Memoization is top-down dynamic programming.

<http://en.wikipedia.org/wiki/Dynamic_programming>

<http://xw2k.nist.gov/dads//HTML/dynamicprog.html>

<http://www.cs.cmu.edu/~avrim/451f09/lectures/lect1001.pdf>

<http://www.algorithmist.com/index.php/Dynamic_Programming>

~~~
SeanLuke
> Memoization is top-down dynamic programming.

It really is not.

Memoization has the same time complexity as dynamic programming. But it can
have much worse space complexity.

First, some approximate definitions. The tasks for which memoization and dynamic
programming are typically useful are those in which a problem can be
decomposed into subproblems, sub-subproblems, and so on, where the
problem->subproblem dependency is many-to-many, not one-to-many. That is, the
dependency graph is a DAG, not a tree. By _memoization_ I mean the process of
computing the solution to a function by first looking up in a hash table to
see if the solution has already been found, and only computing the solution
and storing it if it has not been found. By _dynamic programming_ I mean the
bottom-up process of first solving all of the lowest level decomposed
subproblems, then solving the next higher level subproblems using the
solutions found at the lower level, then the next higher level using the
solutions found so far, and so on up to the desired top-level problem.

Whether memoization will do the job depends on the nature of your task. In
some tasks the higher-level problems depend on simultaneously knowing the
answers to all of the subproblems at every level. For those kinds of problems
memoization will be just fine.

But for _many_ dynamic programming tasks, you don't need to keep around all
the low-level problems: just the most immediate ones. For example, it's often
the case that once you've computed problem layer N, you can get rid of all the
layers 0...N-1. They're not necessary to compute problem layer N+1. For these
kinds of problems the space cost of dynamic programming is just in keeping
that immediate layer N. But memoization has no way of forgetting these layers,
because not only is it top-down, but it's also typically depth-first. As a
result, by the time you solve the final layer, you have _all of the previous
layers_ stored in your hash table.
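
A toy illustration of that space point, using binomial coefficients as a stand-in problem: the bottom-up version keeps only the current layer (one row of Pascal's triangle), while the memoized top-down version ends up caching every (n, k) subproblem by the time the top call returns.

    # Bottom-up DP with a rolling layer vs. top-down memoization.
    from functools import lru_cache

    def binom_bottom_up(n: int, k: int) -> int:
        row = [1]                                   # row 0 of Pascal's triangle
        for _ in range(n):
            row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
        return row[k]                               # only the current row is kept

    @lru_cache(maxsize=None)
    def binom_memoized(n: int, k: int) -> int:
        if k == 0 or k == n:
            return 1
        # the cache retains every (n, k) subproblem ever computed
        return binom_memoized(n - 1, k - 1) + binom_memoized(n - 1, k)

    print(binom_bottom_up(30, 12), binom_memoized(30, 12))   # 86493225 86493225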

~~~
sleepdev
I've always thought of dynamic programming as memoized computations +
ordering. Maybe memoization isn't quite as efficient as dynamic programming
but it feels like it is halfway there.

