Introduction to Dynamic Programming
Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them down into simpler subproblems and storing their solutions to avoid redundant calculations.
The Core Idea
Instead of recalculating the same thing many times, we calculate it once and remember the answer.
Without DP (slow):

```text
fib(5) = fib(4) + fib(3)
       = (fib(3) + fib(2)) + (fib(2) + fib(1))
       = ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ...
```

Many repeated calculations!

With DP (fast):

```text
fib(0) = 0                      [remember]
fib(1) = 1                      [remember]
fib(2) = fib(1) + fib(0) = 1    [remember]
fib(3) = fib(2) + fib(1) = 2    [remember]
fib(4) = fib(3) + fib(2) = 3    [remember]
fib(5) = fib(4) + fib(3) = 5    [use remembered values]
```
When to Use Dynamic Programming
Look for these signals:
1. Optimal Substructure
The optimal solution contains optimal solutions to subproblems.
Example: Shortest path from A to C through B = shortest(A→B) + shortest(B→C)
2. Overlapping Subproblems
The same subproblems are solved multiple times.
Example: Calculating fib(5) requires fib(3) multiple times
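The repetition is easy to see by instrumenting the naive recursion with a call counter (the `call_counts` tally below is purely illustrative):

```python
call_counts = {}

def fib(n):
    # Tally how many times each subproblem is solved
    call_counts[n] = call_counts.get(n, 0) + 1
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(call_counts)  # fib(3) is solved twice, fib(2) three times, fib(1) five times
```

The counts grow exponentially with `n`, which is exactly the redundancy DP removes.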
3. Keywords in Problem
- "Maximum/minimum"
- "Count number of ways"
- "Is it possible to..."
- "Longest/shortest"
- "Optimize"
The Two Approaches
Top-Down (Memoization)
Start from the original problem, recurse down into subproblems, and cache each result.
```python
def fib(n, memo={}):
    # Base cases
    if n <= 1:
        return n
    # Check cache
    if n in memo:
        return memo[n]
    # Calculate and cache
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

print(fib(50))  # Fast!
```
Pros:
- Intuitive (follows recursive thinking)
- Only computes needed subproblems
- Easy to write
Cons:
- Recursion overhead
- Stack overflow for large n
- Extra space for call stack
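The stack-overflow risk is real in Python: even with a cache, the first call still recurses all the way down to the base case before anything is memoized. A small sketch (using a `None` default here so each top-level call gets a fresh cache; that detail is a stylistic choice, not part of the original example):

```python
import sys

def fib_memo(n, memo=None):
    # Fresh cache per top-level call
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n in memo:
        return memo[n]
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

# Memoization still needs n nested frames on the first descent,
# so inputs past the interpreter's recursion limit blow the stack:
try:
    fib_memo(sys.getrecursionlimit() + 100)
except RecursionError:
    print("RecursionError: a bottom-up loop avoids this")
```

Bottom-up tabulation sidesteps the problem entirely because it uses a plain loop.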
Bottom-Up (Tabulation)
Start from the base cases and build up to the answer iteratively.
```python
def fib(n):
    if n <= 1:
        return n
    # Build table
    dp = [0] * (n + 1)
    dp[0] = 0
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

print(fib(50))  # Fast!
```
Pros:
- No recursion overhead
- Explicit, predictable evaluation order
- Often more space-efficient
Cons:
- Less intuitive at first
- May compute unnecessary subproblems
- Harder to write initially
The DP Problem-Solving Process
Step 1: Identify if it's a DP Problem
Ask yourself:
- Can I break this into smaller similar problems?
- Am I solving the same subproblem multiple times?
- Do I need an optimal solution?
Step 2: Define the State
What information do I need to know at each step?
Examples:
- `dp[i]` = answer for the first i elements
- `dp[i][j]` = answer for the substring from i to j
- `dp[i][w]` = max value using the first i items with weight limit w
Step 3: Find the Recurrence Relation
How does the current state relate to previous states?
Examples:
- `dp[i] = dp[i-1] + dp[i-2]` (Fibonacci)
- `dp[i] = max(dp[i-1], dp[i-2] + nums[i])` (House Robber)
- `dp[i][j] = min(dp[i-1][j], dp[i][j-1]) + grid[i][j]` (Min Path Sum)
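As one worked example, the House Robber recurrence above tabulates directly (the function name and sample input are illustrative):

```python
def rob(nums):
    # dp[i] = max loot from the first i+1 houses, no two adjacent
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])
    for i in range(2, len(nums)):
        # Skip house i, or rob it on top of the best through i-2
        dp[i] = max(dp[i-1], dp[i-2] + nums[i])
    return dp[-1]

print(rob([2, 7, 9, 3, 1]))  # 12: rob houses 2, 9, and 1
```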
Step 4: Identify Base Cases
What are the simplest subproblems I can solve directly?
Examples:
- `dp[0] = 0`, `dp[1] = 1` (Fibonacci)
- `dp[0] = nums[0]` (first house)
- `dp[0][0] = grid[0][0]` (start position)
Step 5: Determine Computation Order
In what order should I fill the DP table?
- 1D: Usually left to right
- 2D: Usually top to bottom, left to right
- Sometimes: diagonal, or special order
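For instance, the Min Path Sum recurrence from Step 3 reads `dp[i-1][j]` and `dp[i][j-1]`, so a top-to-bottom, left-to-right fill guarantees both are ready. A sketch (assuming movement only right or down; the grid is illustrative):

```python
def min_path_sum(grid):
    # dp[i][j] = min cost to reach cell (i, j) from the top-left
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    dp[0][0] = grid[0][0]
    # First row and column each have only one way in
    for j in range(1, cols):
        dp[0][j] = dp[0][j-1] + grid[0][j]
    for i in range(1, rows):
        dp[i][0] = dp[i-1][0] + grid[i][0]
    # Row-major fill: dp[i-1][j] and dp[i][j-1] are computed
    # before dp[i][j] is needed
    for i in range(1, rows):
        for j in range(1, cols):
            dp[i][j] = min(dp[i-1][j], dp[i][j-1]) + grid[i][j]
    return dp[-1][-1]

print(min_path_sum([[1, 3, 1], [1, 5, 1], [4, 2, 1]]))  # 7
```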
Step 6: Implement
Choose top-down or bottom-up and code it!
Simple Example: Climbing Stairs
Problem: You're climbing stairs. You can take 1 or 2 steps at a time. How many distinct ways can you climb to the top?
Step 1: Is it DP?
- ✅ Optimal substructure: ways(n) depends on ways(n-1) and ways(n-2)
- ✅ Overlapping subproblems: ways(3) needed for both ways(4) and ways(5)
- ✅ Counting number of ways → DP keyword
Step 2: Define State
dp[i] = number of ways to reach step i
Step 3: Recurrence Relation
To reach step i, I can come from:
- step i-1 (take 1 step)
- step i-2 (take 2 steps)
So: dp[i] = dp[i-1] + dp[i-2]
Step 4: Base Cases
dp[0] = 1 (one way: don't climb)
dp[1] = 1 (one way: take 1 step)
Step 5: Order
Left to right (i from 2 to n)
Step 6: Implement
Top-Down:
```python
def climbStairs(n, memo={}):
    if n <= 1:
        return 1
    if n in memo:
        return memo[n]
    memo[n] = climbStairs(n-1, memo) + climbStairs(n-2, memo)
    return memo[n]
```
Bottom-Up:
```python
def climbStairs(n):
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0] = 1
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```
Space-Optimized:
```python
def climbStairs(n):
    if n <= 1:
        return 1
    prev2, prev1 = 1, 1
    for _ in range(2, n + 1):
        current = prev1 + prev2
        prev2, prev1 = prev1, current
    return prev1
```
Time vs. Space Complexity
Without DP (Naive Recursion)
```python
def fib(n):
    if n <= 1:
        return n
    return fib(n-1) + fib(n-2)

# Time: O(2^n) - exponential!
# Space: O(n) - recursion depth
```
With Memoization
```python
def fib(n, memo={}):
    if n <= 1:
        return n
    if n in memo:
        return memo[n]
    memo[n] = fib(n-1, memo) + fib(n-2, memo)
    return memo[n]

# Time: O(n) - compute each value once
# Space: O(n) - memo + recursion stack
```
With Tabulation
```python
def fib(n):
    if n <= 1:
        return n
    dp = [0] * (n + 1)
    dp[0], dp[1] = 0, 1
    for i in range(2, n + 1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

# Time: O(n)
# Space: O(n) - dp array
```
# Time: O(n)
# Space: O(n) - dp array
Space-Optimized
```python
def fib(n):
    if n <= 1:
        return n
    prev2, prev1 = 0, 1
    for _ in range(2, n + 1):
        current = prev1 + prev2
        prev2, prev1 = prev1, current
    return prev1

# Time: O(n)
# Space: O(1) - only three variables!
```
Common Mistakes
Mistake 1: Forgetting Base Cases
```python
# Wrong - infinite recursion
def climb(n, memo):
    if n in memo:
        return memo[n]
    memo[n] = climb(n-1, memo) + climb(n-2, memo)
    return memo[n]
```

```python
# Right - base cases prevent infinite recursion
def climb(n, memo):
    if n <= 1:  # Base case!
        return 1
    if n in memo:
        return memo[n]
    memo[n] = climb(n-1, memo) + climb(n-2, memo)
    return memo[n]
```
Mistake 2: Wrong State Definition
```python
# Wrong - state doesn't capture enough info
# dp[i] = "something at position i"
# But we also need to track whether we skipped the previous element!

# Right - state captures all necessary info
# dp[i][skipped] = answer at position i with the given skip status
```
Mistake 3: Incorrect Recurrence
```python
# Wrong - doesn't consider all possibilities
dp[i] = dp[i-1] + nums[i]  # What about skipping i?

# Right - considers all choices
dp[i] = max(dp[i-1], dp[i-2] + nums[i])
```
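The difference shows up immediately on a concrete input. Here is a sketch comparing the two recurrences (helper names are illustrative; the rolling variables are equivalent to a dp array):

```python
def dp_wrong(nums):
    # dp[i] = dp[i-1] + nums[i]: forced to take every element
    total = 0
    for x in nums:
        total += x
    return total

def dp_right(nums):
    # dp[i] = max(dp[i-1], dp[i-2] + nums[i]): may skip element i
    prev2, prev1 = 0, 0
    for x in nums:
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1

nums = [2, 10, 3, 10, 2]
print(dp_wrong(nums))  # 27: illegally takes adjacent elements
print(dp_right(nums))  # 20: takes the two 10s
```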
DP vs. Other Approaches
DP vs. Greedy
- Greedy: Make locally optimal choice at each step
- DP: Consider all possibilities, find global optimum
- Use DP when: Greedy doesn't guarantee optimal solution
DP vs. Divide and Conquer
- Divide and Conquer: Split into independent subproblems
- DP: Subproblems overlap
- Use DP when: Same subproblems computed multiple times
DP vs. Backtracking
- Backtracking: Find all solutions
- DP: Find one optimal solution
- Use DP when: You don't need all solutions, just the best
Quick Recognition Guide
See these? Think DP:
- Fibonacci, climbing stairs → 1D DP
- Grid paths, edit distance → 2D DP
- Knapsack, coin change → 2D DP (items x capacity)
- Palindrome substrings → 2D DP (substring i to j)
- Buy/sell stock → State machine DP
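As a taste of state-machine DP, here is a sketch of the buy/sell stock problem in its unlimited-transactions variant (that variant and the function name are assumptions for illustration; each day has two states, holding a share or not):

```python
def max_profit(prices):
    hold = float('-inf')  # best profit so far while holding a share
    free = 0              # best profit so far while holding nothing
    for p in prices:
        hold = max(hold, free - p)  # buy today, or keep holding
        free = max(free, hold + p)  # sell today, or stay out
    return free

print(max_profit([7, 1, 5, 3, 6, 4]))  # 7: buy 1 sell 5, buy 3 sell 6
```

Each state's transition only looks at the previous day's states, so O(1) space suffices.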
Key Takeaway
Dynamic Programming is about avoiding redundant work. Identify that subproblems overlap, define what state you need to track, find how states relate to each other, and build your solution bottom-up or top-down with memoization. The hardest part is recognizing DP problems and defining the right state—everything else follows naturally.