
dynamic programming

(algorithmic technique)

Definition: Solve an optimization problem by caching subproblem solutions (memoization) rather than recomputing them.
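
To illustrate the definition, here is a minimal C++ sketch (not taken from the cited handbook) that computes the length of a longest common subsequence, one of the problems listed below, by recursing on string suffixes and caching each subproblem's answer so it is computed at most once. The function and variable names (lcsLen, memo) are illustrative only.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Length of a longest common subsequence of a[i..] and b[j..].
    // Each subproblem (i, j) is cached in `memo` so it is solved at most once.
    int lcsLen(const std::string& a, const std::string& b,
               std::size_t i, std::size_t j,
               std::vector<std::vector<int>>& memo) {
        if (i == a.size() || j == b.size()) return 0;  // an empty suffix has no common subsequence
        int& cached = memo[i][j];
        if (cached != -1) return cached;               // reuse the cached subproblem solution
        if (a[i] == b[j])
            cached = 1 + lcsLen(a, b, i + 1, j + 1, memo);
        else
            cached = std::max(lcsLen(a, b, i + 1, j, memo),
                              lcsLen(a, b, i, j + 1, memo));
        return cached;
    }

    int main() {
        const std::string a = "dynamic", b = "programming";
        std::vector<std::vector<int>> memo(a.size(), std::vector<int>(b.size(), -1));
        std::cout << lcsLen(a, b, 0, 0, memo) << "\n";  // prints 3 (e.g., "ami")
    }

The same table can equally be filled bottom-up instead of top-down; either way each of the |a|·|b| subproblems is solved exactly once.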

Aggregate parent (I am a part of or used in ...)
Smith-Waterman algorithm.

Solves these problems: matrix-chain multiplication problem, longest common substring, longest common subsequence.

See also greedy algorithm, principle of optimality.

Note: From Algorithms and Theory of Computation Handbook, page 1-26, Copyright © 1999 by CRC Press LLC. Appearing in the Dictionary of Computer Science, Engineering and Technology, Copyright © 2000 CRC Press LLC.

Author: CRC-A

Implementation

Mark Nelson's tutorial on using C++ hash tables for memoization, Hash Table Memoization: Simplifying Dynamic Programming (C++). Oleg Kiselyov's program to optimally lay out a page using dynamic programming (C++).
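
As a rough sketch of the hash-table memoization idea those pages describe (an assumption-laden illustration, not Mark Nelson's or Oleg Kiselyov's actual code), the following C++ fragment caches each recursive result in a std::unordered_map keyed by the subproblem, so repeated subproblems become lookups rather than recomputation. The coin-change problem and the name fewestCoins are choices made for this example.

    #include <iostream>
    #include <unordered_map>
    #include <vector>

    // Fewest coins needed to make `amount` from `coins`, or -1 if impossible.
    // Each amount's answer is cached in a hash table so it is computed only once.
    int fewestCoins(int amount, const std::vector<int>& coins,
                    std::unordered_map<int, int>& memo) {
        if (amount == 0) return 0;
        if (amount < 0) return -1;
        auto it = memo.find(amount);
        if (it != memo.end()) return it->second;   // cache hit: no recomputation
        int best = -1;
        for (int c : coins) {
            int sub = fewestCoins(amount - c, coins, memo);
            if (sub >= 0 && (best < 0 || sub + 1 < best)) best = sub + 1;
        }
        memo[amount] = best;
        return best;
    }

    int main() {
        std::unordered_map<int, int> memo;
        std::cout << fewestCoins(63, {1, 5, 10, 25}, memo) << "\n";  // prints 6 (25+25+10+1+1+1)
    }

A hash table is convenient when the subproblem space is sparse or awkward to index with an array; when subproblems form a dense grid, a plain table (as in the sketch above) is usually faster.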


Entry modified 4 October 2021.

Cite this as:
Algorithms and Theory of Computation Handbook, CRC Press LLC, 1999, "dynamic programming", in Dictionary of Algorithms and Data Structures [online], Paul E. Black, ed. 4 October 2021. (accessed TODAY) Available from: https://www.nist.gov/dads/HTML/dynamicprog.html