CSE 5311 Section 501 Fall 1998
Homework 3
Due: February 25, 1998, in class
Here is the m table (entries m[i,j] for 1 <= i <= j <= 6; entries below the diagonal are unused and omitted):

            j=1     j=2     j=3     j=4     j=5     j=6
    i=1       0   15750    7875    9375   11875   15125
    i=2               0    2625    4375    7125   10500
    i=3                       0     750    2500    5375
    i=4                               0    1000    3500
    i=5                                       0    5000
    i=6                                               0
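These values match the textbook's matrix-chain example. Assuming the input was the dimension sequence p = (30, 35, 15, 5, 10, 20, 25), which is an assumption since the problem statement is not reproduced here, the following Python sketch reproduces the table:

    def matrix_chain_order(p):
        # m[i][j] = minimum number of scalar multiplications to compute
        # the product of matrices i..j, where matrix i is p[i-1] x p[i].
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):            # chain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                              for k in range(i, j))
        return m

    p = [30, 35, 15, 5, 10, 20, 25]    # assumed input
    m = matrix_chain_order(p)
    print(m[1][6])                     # 15125, the upper-right entry above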
Here is the table computed by LCS-LENGTH, where ``UL'' represents an arrow pointing up and left, ``UU'' represents an up arrow, and ``LL'' represents a left arrow.
         j=0    j=1    j=2    j=3    j=4    j=5    j=6    j=7    j=8    j=9
    i=0 (0 --) (0 --) (0 --) (0 --) (0 --) (0 --) (0 --) (0 --) (0 --) (0 --)
    i=1 (0 --) (0 UU) (1 UL) (1 LL) (1 UL) (1 UL) (1 LL) (1 UL) (1 UL) (1 LL)
    i=2 (0 --) (1 UL) (1 UU) (2 UL) (2 LL) (2 LL) (2 UL) (2 LL) (2 LL) (2 UL)
    i=3 (0 --) (1 UL) (1 UU) (2 UL) (2 UU) (2 UU) (3 UL) (3 LL) (3 LL) (3 UL)
    i=4 (0 --) (1 UU) (2 UL) (2 UU) (3 UL) (3 UL) (3 UU) (4 UL) (4 UL) (4 LL)
    i=5 (0 --) (1 UL) (2 UU) (3 UL) (3 UU) (3 UU) (4 UL) (4 UU) (4 UU) (5 UL)
    i=6 (0 --) (1 UU) (2 UL) (3 UU) (4 UL) (4 UL) (4 UU) (5 UL) (5 UL) (5 UU)
    i=7 (0 --) (1 UU) (2 UL) (3 UU) (4 UL) (5 UL) (5 LL) (5 UL) (6 UL) (6 LL)
    i=8 (0 --) (1 UU) (2 UL) (3 UU) (4 UL) (5 UL) (5 UU) (6 UL) (6 UL) (6 UU)
The longest common subsequence is (1, 0, 1, 0, 1, 1).
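For reference, here is a minimal Python sketch of the LCS-LENGTH computation that produces a table of this form, together with a traceback helper. The sketch is generic, since the input sequences are not reproduced above:

    def lcs_length(X, Y):
        # c[i][j] = length of an LCS of X[:i] and Y[:j]; b[i][j] records the
        # arrow: 'UL' (up and left, a match), 'UU' (up), or 'LL' (left).
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        b = [[None] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j], b[i][j] = c[i - 1][j - 1] + 1, 'UL'
                elif c[i - 1][j] >= c[i][j - 1]:
                    c[i][j], b[i][j] = c[i - 1][j], 'UU'
                else:
                    c[i][j], b[i][j] = c[i][j - 1], 'LL'
        return c, b

    def read_lcs(b, X, i, j):
        # Follow the arrows back from (i, j), collecting matched elements of X.
        if i == 0 or j == 0:
            return []
        if b[i][j] == 'UL':
            return read_lcs(b, X, i - 1, j - 1) + [X[i - 1]]
        if b[i][j] == 'UU':
            return read_lcs(b, X, i - 1, j)
        return read_lcs(b, X, i, j - 1)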
Yes, by Theorem 15.1, because the black-height of a node can be computed from the information at the node and its two children. In fact, the black-height can be computed from just one child's information: the black-height of a node is the black-height of a red child, or the black-height of a black child plus one.
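A minimal sketch of that observation, assuming a node record with a color, child pointers, and a stored bh field (all of these names are hypothetical):

    # Hypothetical augmented red-black tree node; field names are assumptions.
    class RBNode:
        def __init__(self, key, color, left=None, right=None, bh=1):
            self.key = key
            self.color = color   # 'RED' or 'BLACK'
            self.left = left     # None plays the role of a black NIL leaf
            self.right = right
            self.bh = bh         # black-height, maintained as the tree changes

    def black_height(x):
        # Recompute bh(x) from a single child: NIL leaves are black with
        # black-height 0, a red child's bh passes through unchanged, and a
        # black child's bh is incremented.
        child = x.left           # either child gives the same answer
        if child is None:        # NIL leaf: contributes one black node
            return 1
        return child.bh if child.color == 'RED' else child.bh + 1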
The textbook provides an optimal substructure observation: let i be the highest-numbered item in an optimal solution S for W pounds and items 1, 2, ..., n. Then S' = S - {i} is an optimal solution for W - wi pounds and items 1, 2, ..., i-1, and the value of the solution S is vi plus the value of the subproblem solution S'.
Define c[i,w] to be the value of the solution for items 1..i and maximum weight w. Then
    c[i,w] = 0                                     if i = 0 or w = 0
    c[i,w] = c[i-1,w]                              if wi > w
    c[i,w] = max(vi + c[i-1,w-wi], c[i-1,w])       if i > 0 and wi <= w
This says that the value of a solution for i items either includes item i, in which case it is vi plus a subproblem solution for i-1 items and the weight excluding wi, or does not include item i, in which case it is a subproblem solution for i-1 items and the same weight. The better of these two choices should be made.
The algorithm takes as input the maximum weight W, the number of items n, and the two sequences v = (v1, v2, ..., vn) and w = (w1, w2, ..., wn). It stores the c[i,w] values in a table c[0..n, 0..W] whose entries are computed in row-major order. At the end of the computation, c[n,W] contains the maximum value the thief can take.
    Dynamic-0-1-Knapsack(v, w, n, W)
        for w = 0 to W
            c[0,w] = 0
        for i = 1 to n
            c[i,0] = 0
            for w = 1 to W
                if wi <= w
                    then if vi + c[i-1,w-wi] > c[i-1,w]
                             then c[i,w] = vi + c[i-1,w-wi]
                             else c[i,w] = c[i-1,w]
                    else c[i,w] = c[i-1,w]
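For concreteness, here is a runnable Python rendering of the same table-filling loop. It is a sketch that assumes v and w are 0-indexed Python lists (so v[i-1] and w[i-1] belong to item i) and renames the inner loop variable so it does not reuse the name w:

    def knapsack_table(v, w, n, W):
        # c[i][wt] = best value using items 1..i with weight limit wt;
        # row 0 and column 0 are left at 0.
        c = [[0] * (W + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for wt in range(1, W + 1):
                if w[i - 1] <= wt:
                    # take item i or leave it, whichever is better
                    c[i][wt] = max(v[i - 1] + c[i - 1][wt - w[i - 1]],
                                   c[i - 1][wt])
                else:
                    c[i][wt] = c[i - 1][wt]     # item i does not fit
        return c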
The set of items to take can be deduced from the c table by starting at c[n,W] and tracing where the optimal values came from. If c[i,w] = c[i-1,w], item i is not part of the solution, and we continue tracing with c[i-1,w]. Otherwise, item i is part of the solution, and we continue tracing with c[i-1, w-wi]. This algorithm requires Theta(nW) time in total: Theta(nW) time to fill in the c table ((n+1)(W+1) entries, each requiring Theta(1) time to compute) and O(n) time to trace the solution, since the trace starts in row n of the table and moves up one row at each step.
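And a sketch of the traceback just described, reusing knapsack_table from above (the example data at the end is hypothetical):

    def knapsack_items(c, w, n, W):
        # Trace back from c[n][W]: if the value equals the entry in the row
        # above, item i was not taken; otherwise take it and drop its weight.
        items = []
        i, wt = n, W
        while i > 0:
            if c[i][wt] != c[i - 1][wt]:
                items.append(i)
                wt -= w[i - 1]
            i -= 1
        return items

    c = knapsack_table([3, 4, 5], [2, 3, 4], 3, 6)      # hypothetical items
    print(c[3][6], knapsack_items(c, [2, 3, 4], 3, 6))  # 8 [3, 1]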
Note that if i=0 and j>0, then team A has already won and P(0,j) = 1. Similarly, P(i,0) = 0 for i>0. If i and j are both greater than 0, then P(i,j) must be the average of P(i-1,j) (the probability that A wins the match if it wins the next game) and P(i,j-1) (the probability that A wins the match if it loses the next game).
A dynamic programming solution fills in P[i,j] entries in diagonals beginning at i=0,j=0 and proceeding up and to the left along diagonals representing entries with a constant value of i+j.
    Odds(i, j)                                            TIMES
        for s = 1 to i+j                                  n+1
            P[0,s] = 1.0                                  n
            P[s,0] = 0.0                                  n
            for k = 1 to s-1                              n*s
                P[k,s-k] = (P[k-1,s-k] + P[k,s-k-1]) / 2  n(s-1)
        return(P[i,j])                                    1
Assuming that n = i+j, the run time of Odds is Theta(n^2), or Theta((i+j)^2), which is much more efficient than the exponential running time of the straightforward recursive solution.
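A runnable Python rendering of Odds, filling the same diagonals (a sketch; it assumes, as the recurrence above does, that each game is an even coin flip):

    def odds(i, j):
        # P[a][b] = probability that team A wins the match when A needs
        # a more wins and B needs b more wins.
        n = i + j
        P = [[0.0] * (n + 1) for _ in range(n + 1)]
        for s in range(1, n + 1):          # the diagonal with a + b = s
            P[0][s] = 1.0                  # A needs 0 more wins: A has won
            P[s][0] = 0.0                  # B needs 0 more wins: A has lost
            for k in range(1, s):
                P[k][s - k] = (P[k - 1][s - k] + P[k][s - k - 1]) / 2
        return P[i][j]

    print(odds(2, 1))    # A must win both remaining games: 0.25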
In addition, consider a greedy solution that always chooses the middle key of the sequence to be the root of the tree. Show that this greedy choice always appears in an optimal solution to the tree problem, although there may exist additional optimal solutions not containing the greedy choice.
To show that the problem exhibits optimal substructure, let c(i,j) be the cost of a binary search tree T(i,j) built for the keys i through j. Assume we have an optimal tree T(i,j) for the keys i through j consisting of a root key k with left subtree T(i,k-1) containing keys i to k-1 and right subtree T(k+1,j) containing keys k+1 to j. We can express the cost of the tree as

    c(i,j) = c(i,k-1) + c(k+1,j) + (j - i)
The last term derives from the fact that attaching the subtrees T(i,k-1) and T(k+1,j) to the root key k adds one to the depth of each key in the subtrees; since there are j - i keys in the two subtrees, this contributes j - i to the cost.
To prove optimal substructure, we need to show that the subtrees T(i,k-1) and T(k+1,j) contained in T(i,j) are optimal binary search trees for the keys i to k-1 and k+1 to j, respectively. Using proof by contradiction, assume that there is a tree T'(i,k-1) having a lower cost than T(i,k-1); that is, c'(i,k-1) < c(i,k-1). Then we can build a new tree T'(i,j) by replacing T(i,k-1) with T'(i,k-1) in T(i,j). The cost of the tree T'(i,j) will be the same as in the equation above with c(i,k-1) replaced by c'(i,k-1). Since c'(i,k-1) < c(i,k-1), we have c'(i,j) < c(i,j). However, this contradicts the original assumption that T(i,j) is an optimal tree. Therefore the subtrees must be optimal as well (the argument for T(k+1,j) is symmetric), and the problem exhibits optimal substructure.
To prove the greedy choice, let n = j - i + 1 be the number of keys to be stored in the BST. Minimizing the height of the BST minimizes the sum of the depths of the keys in the tree. For a binary tree with n keys, the minimum height (the height of an optimal tree) is floor(lg n). An optimal split of n keys into a root key and a left and a right subtree puts floor((n-1)/2) keys in one subtree and ceil((n-1)/2) keys in the other. Thus, the optimal key to split on is the middle key, key floor((i+j)/2). If i=1 and j=n, this is key floor((n+1)/2). When i+j is odd, either key floor((i+j)/2) or key ceil((i+j)/2) will work, so we arbitrarily use the floor. Any other choice puts an excessive number of keys in one subtree, giving that subtree greater depth and thereby increasing the cost of the entire tree. Thus, choosing the middle key of the sequence as the root will be in an optimal solution.
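A small sketch of this greedy construction, where cost is the sum of key depths as above (the function name is ours):

    def build(i, j):
        # Root the tree for keys i..j at the middle key floor((i+j)/2);
        # returns nested (key, left, right) tuples, None for an empty subtree.
        if i > j:
            return None
        k = (i + j) // 2                   # the greedy choice
        return (k, build(i, k - 1), build(k + 1, j))

    print(build(1, 7))   # root 4, subtree roots 2 and 6; height floor(lg 7) = 2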
Next, we show that the greedy choice can be made first. In particular, we show that we can choose the root node before any other node. The proof is trivial, because no matter what order we assign keys to their positions, the depths will not change and the cost will remain optimal. Therefore, since we can assign any key first, we can assign the greedy choice key first.
The above proof shows that we can make the greedy choice for the root of the entire tree. The next step is to prove by induction that we can make greedy choices at any point in the tree. Since this is equivalent to proving that the problem exhibits optimal substructure, which was already done in part 1a, we have completed the proof that the greedy choice satisfies the greedy-choice property.