You've already seen problems that can be reduced to subproblems with the same structure -- recursion comes to mind. A classical sequence whose values are defined recursively is the Fibonacci sequence:
int fib(int n)
{
    if (n < 2) {
        return n;                    // base cases: fib(0) = 0, fib(1) = 1
    } else {
        return fib(n-1) + fib(n-2);  // the two recursive subproblems
    }
}
However, for fairly modest values of n this takes far too long: the two recursive calls recompute the same subproblems over and over, so the running time grows exponentially with n. A more reasonable way to calculate fib(n) is from the bottom up:
int fib(int n)
{
    int *f, answer;
    if (n < 2)
        return n;
    f = new int[n+1];
    f[0] = 0;                        // base cases
    f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i-1] + f[i-2];      // each entry uses only earlier entries
    answer = f[n];
    delete [] f;                     // new[] must be paired with delete[]
    return answer;
}
Even given the new/delete operations to allocate and free memory, this is a much better algorithm, being linear in n rather than exponential.
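In fact, since the recurrence only ever looks back two positions, even the array is more than we need. Here is a minimal sketch that keeps just the last two values (the name fib2 is my own):

int fib2(int n)
{
    if (n < 2)
        return n;
    int prev = 0, curr = 1;          // fib(0) and fib(1)
    for (int i = 2; i <= n; i++) {
        int next = prev + curr;      // the recurrence f[i] = f[i-1] + f[i-2]
        prev = curr;
        curr = next;
    }
    return curr;
}

This trick of keeping only the part of the table that the recurrence still needs will come up again below.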
Another algorithm that fits the same mold is the Floyd-Warshall all-pairs shortest distance algorithm. At the heart of that algorithm was the following recurrence:

    d(k, i, j) = min( d(k-1, i, j),  d(k-1, i, k) + d(k-1, k, j) )

where d(k, i, j) is the shortest distance from vertex i to vertex j using only intermediate vertices numbered at most k. The Dynamic Programming approach would be to start at the bottom: d(0, i, j) is defined to be the weight of edge (i, j), and we can easily build up a three-dimensional array starting from these values and using the recurrence. The solution presented in our course readings uses a two-dimensional array (saving space), although the algorithm is still O(n^3).
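For concreteness, here is a minimal sketch of that two-dimensional version, assuming the graph arrives as an n-by-n weight matrix; the function name, the flattened row-major layout, and the INF sentinel are choices made for this sketch, not taken from the readings:

// d is an n-by-n matrix flattened row-major, so entry (i, j) is
// d[i*n + j]. On entry it holds edge weights, with 0 on the diagonal
// and INF wherever there is no edge; on exit it holds all-pairs
// shortest distances.
void floyd_warshall(int *d, int n)
{
    const int INF = 1000000000;
    for (int k = 0; k < n; k++)          // allow vertex k as an intermediate
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (d[i*n + k] < INF && d[k*n + j] < INF &&
                        d[i*n + k] + d[k*n + j] < d[i*n + j])
                    d[i*n + j] = d[i*n + k] + d[k*n + j];
}

One array suffices because pass k only reads row k and column k, and the recurrence shows pass k never changes those entries (d(k, i, k) = d(k-1, i, k), since d(k-1, k, k) = 0), so overwriting the previous table in place is safe.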
What are the principles that tie these approaches together under the heading of Dynamic Programming? Given a problem for which you want to find an optimal solution: