https://drive.google.com/file/d/1HuLSp-sVpmhvp-UQwOJXCtoHHrjwF2Jd/view?usp=sharing
This document details the concepts and applications of Dynamic Programming (DP), specifically within the context of an algorithm analysis course.
General Method of Dynamic Programming
Dynamic Programming is a problem-solving approach in which a complex problem is broken down into simpler subproblems.
- Core Principle: It avoids recomputing the same subproblem twice by storing solutions in a table and fetching them when needed (memoization).
- Principle of Optimality: An optimal sequence of decisions has the property that, whatever the initial state and first decision, the remaining decisions must themselves form an optimal sequence with respect to the state resulting from that first decision.
- Components: DP is essentially the combination of recursion and memoization.
- Key Characteristics: Problems suitable for DP typically have simple, overlapping subproblems and an optimal structure where the overall optimal solution contains optimal sub-solutions.
- Approaches:
- Top-down (Memoization): Breaks the problem into subproblems recursively and stores each result for future reuse.
- Bottom-up (Table Method): Computes the smallest inputs first and builds up from their results to larger solutions, avoiding recursion.
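The two approaches can be sketched with the standard Fibonacci illustration (the example and function names are illustrative, not from the source document):

```python
from functools import lru_cache

# Top-down: plain recursion plus a memoization cache.
@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: fill a table from the smallest inputs upward; no recursion.
def fib_bottom_up(n: int) -> int:
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Both run in O(n) time; the bottom-up version also avoids recursion-depth limits.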
Key Applications
The document provides detailed explanations and examples for several classic optimization problems solved using DP:
1. Optimal Binary Search Tree (OBST)
- Goal: To organize a fixed set of identifiers into a binary search tree (BST) that minimizes the total cost of successful and unsuccessful searches.
- Mechanism: It uses the probabilities of successful searches for each identifier ($p_i$) and the probabilities of unsuccessful searches falling in the gaps between identifiers ($q_i$).
- Formula: The minimum cost is determined by the formula $c(i,j) = \min_{i<k\le j} \{c(i,k-1) + c(k,j)\} + w(i,j)$.
- Complexity: The total time complexity for this algorithm is $O(n^3)$.
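The table computation behind this formula might be sketched as follows, assuming the usual indexing ($p$ is 1-indexed, $q$ is 0-indexed) and the weight recurrence $w(i,j) = w(i,j-1) + p_j + q_j$; the function name is an assumption:

```python
def obst_cost(p, q):
    """Minimum expected search cost of an optimal BST.

    p[1..n]: success probabilities for the n sorted identifiers (p[0] unused).
    q[0..n]: failure probabilities for the n+1 gaps between identifiers.
    Applies c(i,j) = min_{i<k<=j} {c(i,k-1) + c(k,j)} + w(i,j),
    with w(i,i) = q[i] and w(i,j) = w(i,j-1) + p[j] + q[j].
    """
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]  # tree over an empty identifier range
    for length in range(1, n + 1):      # subproblem size j - i
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            c[i][j] = w[i][j] + min(
                c[i][k - 1] + c[k][j] for k in range(i + 1, j + 1)
            )
    return c[0][n]
```

There are O(n^2) entries c(i,j) and each takes O(n) candidate roots k, giving the stated O(n^3) total. Probabilities may be passed scaled to integers (e.g. multiples of 1/16) to keep the arithmetic exact.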
2. 0/1 Knapsack Problem
- Constraint: Unlike the greedy method, DP does not allow placing partial objects in the knapsack; an item is either included ($1$) or not ($0$).
- Method: It uses ordered sets of $(Profit, Weight)$ pairs to represent possible states resulting from decision sequences.
- Dominance Rule: To remain efficient, the 'purging rule' or 'dominance rule' is used to discard pairs that are clearly inferior (e.g., if one pair has more weight but less profit than another).
- Complexity: The worst-case time complexity is $O(2^n)$, while the best-case complexity is $O(NW)$, where $N$ is the number of items and $W$ is the knapsack capacity.
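The pair-set method with the purging rule might be sketched like this (the function name and the sort-then-scan purge are implementation choices, not from the source):

```python
def knapsack_pairs(items, capacity):
    """0/1 knapsack via ordered (profit, weight) pair sets with purging.

    items: list of (profit, weight) tuples; capacity: knapsack limit.
    Returns the maximum achievable profit.
    """
    S = [(0, 0)]  # the empty decision sequence
    for profit, weight in items:
        # States extended by taking this item, within capacity.
        S1 = [(p + profit, w + weight) for p, w in S if w + weight <= capacity]
        # Merge the two ordered sets and purge dominated pairs:
        merged = sorted(S + S1, key=lambda pw: (pw[1], -pw[0]))
        S = []
        best_profit = -1
        for p, w in merged:
            if p > best_profit:  # keep only pairs that improve profit
                S.append((p, w))
                best_profit = p
        # any pair with more weight but no more profit is discarded
    return max(p for p, _ in S)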
3. All Pairs Shortest Paths
- Goal: To determine the shortest path between all pairs of vertices in a directed graph.
- Optimality: It assumes that if node $k$ is on the shortest path from $i$ to $j$, then the sub-paths $i \to k$ and $k \to j$ must also be shortest paths.
- Recurrence: The shortest path is updated iteratively using $A^k(i,j) = \min \{A^{k-1}(i,j),\ A^{k-1}(i,k) + A^{k-1}(k,j)\}$.
- Complexity: This approach provides an $O(n^3)$ solution.
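This recurrence is the Floyd–Warshall algorithm; a direct sketch, updating the matrix in place for each intermediate vertex $k$:

```python
INF = float('inf')  # marker for "no edge"

def floyd_warshall(cost):
    """All-pairs shortest paths via
    A^k(i,j) = min(A^{k-1}(i,j), A^{k-1}(i,k) + A^{k-1}(k,j))."""
    n = len(cost)
    A = [row[:] for row in cost]  # A^0 is the cost adjacency matrix
    for k in range(n):            # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if A[i][k] + A[k][j] < A[i][j]:
                    A[i][j] = A[i][k] + A[k][j]
    return A
```

The three nested loops over $n$ vertices give the stated $O(n^3)$ bound; the same matrix can be reused for $A^{k-1}$ and $A^k$ because row and column $k$ do not change in iteration $k$.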
4. Travelling Salesperson Problem (TSP)
- Goal: A salesman must visit every city exactly once and return to the starting city with the minimum total cost.
- Formula: The minimum cost is given by the recurrence $g(i,S) = \min_{j \in S} \{c_{ij} + g(j, S - \{j\})\}$: the cost of moving to an intermediate city $j$ plus the shortest path from $j$ through the remaining cities in $S$.
- Complexity: The time complexity for TSP using DP is $O(n^2 2^n)$.
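A Held–Karp style sketch of this recurrence; encoding the subsets $S$ as bitmasks is an implementation choice, not something from the source:

```python
def tsp(cost):
    """DP for TSP: g(i, S) = min over j in S of cost[i][j] + g(j, S - {j}).

    cost: n x n matrix; the tour starts and ends at city 0.
    """
    n = len(cost)
    full = 1 << (n - 1)  # subsets of {1..n-1}, city i <-> bit i-1
    # g[S][i]: min cost to start at i, visit every city in S, end at 0.
    g = [[float('inf')] * n for _ in range(full)]
    for i in range(1, n):
        g[0][i] = cost[i][0]  # empty S: go straight back to 0
    for S in range(1, full):
        for i in range(1, n):
            if S & (1 << (i - 1)):
                continue  # i must not be among the cities still to visit
            for j in range(1, n):
                if S & (1 << (j - 1)):
                    g[S][i] = min(g[S][i],
                                  cost[i][j] + g[S ^ (1 << (j - 1))][j])
    # Final answer: start at 0 and visit all of {1..n-1}.
    return min(cost[0][j] + g[(full - 1) ^ (1 << (j - 1))][j]
               for j in range(1, n))
```

There are $2^{n-1}$ subsets times $n$ end cities, each taking $O(n)$ work, matching the stated $O(n^2 2^n)$ bound.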
5. Reliability Design
- Goal: To maximize the reliability of a system composed of several devices connected in series by using duplicate devices in parallel for each stage.
- Constraint: The maximization must be achieved under a fixed cost constraint.
- Reliability Calculation: Stage reliability is given by $\phi_i(m_i) = 1 - (1 - r_i)^{m_i}$, where $m_i$ is the number of copies.
- Approach: Similar to the knapsack problem, it uses sets of (reliability, cost) tuples and applies dominance rules to find the optimal number of copies for each device stage.
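A minimal sketch of the tuple-set approach, enumerating feasible copy counts per stage and purging dominated (reliability, cost) tuples; the function name and the purging details are assumptions, and states that overspend simply die out when a later stage cannot afford even one copy:

```python
def reliability_design(devices, budget):
    """Maximize series-system reliability under a total cost budget.

    devices: list of (r_i, c_i) pairs - reliability and cost of one copy
    at stage i. Stage reliability with m parallel copies: 1 - (1 - r_i)^m.
    """
    S = [(1.0, 0)]  # (system reliability so far, cost spent so far)
    for r, c in devices:
        nxt = []
        for rel, spent in S:
            m = 1
            # try every number of parallel copies that fits the budget
            while spent + m * c <= budget:
                stage_rel = 1 - (1 - r) ** m
                nxt.append((rel * stage_rel, spent + m * c))
                m += 1
        # dominance rule: higher cost must buy strictly higher reliability
        nxt.sort(key=lambda rc: (rc[1], -rc[0]))
        S = []
        best = -1.0
        for rel, spent in nxt:
            if rel > best:
                S.append((rel, spent))
                best = rel
    return max(rel for rel, _ in S)
```

With the classic example of three devices costing 30, 15, and 20 with reliabilities 0.9, 0.8, and 0.5 under a budget of 105, the optimum uses (1, 2, 2) copies for a system reliability of 0.648.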