The.Algorithm.Design.Manual.Springer-Verlag.1998
Lecture 13 - dynamic programming applications

16.3-5 Give an algorithm to find the longest monotonically increasing subsequence in a sequence of n numbers.

Build an example first: (5, 2, 8, 7, 3, 1, 6, 4)

Ask yourself what you would like to know about the first n-1 elements to tell you the answer for the entire sequence:

1. The length of the longest increasing sequence in s_1, ..., s_{n-1}. (seems obvious)

2. The length of the longest increasing sequence that s_n will extend! (not as obvious - this is the idea!)

Let l_i be the length of the longest increasing sequence ending with the ith element. How do we compute l_i?

l_i = 1 + max{ l_j : j < i and s_j < s_i }   (l_i = 1 if no such j exists)

sequence: 5 2 8 7 3 1 6 4
l_i:      1 1 2 2 2 1 3 3

To find the longest sequence, we know it ends somewhere, so Length = max over all i of l_i.

file:///E|/LEC/LECTUR16/NODE13.HTM (1 of 7) [19/1/2003 1:35:01]
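The l_i recurrence above can be sketched directly in Python; this is a minimal O(n^2) version (the function name is mine, not from the notes):

```python
def longest_increasing_subsequence(s):
    """Length of the longest monotonically increasing subsequence of s.

    l[i] = length of the longest increasing subsequence ending at s[i]:
        l[i] = 1 + max(l[j] for j < i with s[j] < s[i]), or 1 if no such j.
    """
    n = len(s)
    l = [1] * n
    for i in range(n):
        for j in range(i):
            if s[j] < s[i]:
                # s[i] can extend the best increasing sequence ending at s[j]
                l[i] = max(l[i], l[j] + 1)
    # the longest sequence ends *somewhere*, so take the max over all i
    return max(l) if l else 0

print(longest_increasing_subsequence([5, 2, 8, 7, 3, 1, 6, 4]))  # 3
```

Running it on the example sequence produces exactly the l_i table shown above, with answer 3 (e.g. the subsequence 2, 3, 4).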
The Principle of Optimality

To use dynamic programming, the problem must observe the principle of optimality: whatever the initial state is, the remaining decisions must be optimal with regard to the state following from the first decision.

Combinatorial problems may have this property but may use too much memory/time to be efficient.

Example: The Traveling Salesman Problem

Let C(i, S) be the cost of the optimal path from city i to city 1 that goes through each of the other cities in S exactly once. Here S can be any subset of the cities instead of any subinterval - hence the number of partial results is exponential. Still, with other ideas (some type of pruning or best-first search) it can be effective for combinatorial search.

When can you use Dynamic Programming?

Dynamic programming computes recurrences efficiently by storing partial results. Thus dynamic programming can only be efficient when there are not too many partial results to compute!

There are n! permutations of an n-element set - we cannot use dynamic programming to store the best solution for each subpermutation. There are 2^n subsets of an n-element set - we cannot use dynamic programming to store the best solution for each.

However, there are only n(n-1)/2 contiguous substrings of a string, each described by a starting and ending point, so we can use it for string problems.

There are only n(n-1)/2 possible subtrees of a binary search tree, each described by a maximum and minimum key, so we can use it for optimizing binary search trees.

Dynamic programming works best on objects which are linearly ordered and cannot be rearranged - characters in a string, matrices in a chain, points around the boundary of a polygon, the left-to-right order of leaves in a search tree.
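The exponential-subset TSP recurrence above is the Held-Karp algorithm. A minimal sketch follows; the function name, the `dist` cost-matrix parameter, and the convention of starting and ending the tour at city 0 are my assumptions, not from the notes:

```python
from itertools import combinations

def held_karp(dist):
    """Cost of an optimal TSP tour over cities 0..n-1, starting/ending at 0.

    C[(S, i)] = cost of the cheapest path that starts at city 0, visits
    every city in subset S exactly once, and ends at i (with i in S).
    The 2^n subsets S are exactly the exponential state space the notes
    describe; total work is O(2^n * n^2) -- far better than n! brute force,
    but still exponential.
    """
    n = len(dist)
    # base case: paths 0 -> i visiting only {i}
    C = {(frozenset([i]), i): dist[0][i] for i in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for i in S:
                # extend the best path ending at some j in S - {i} by edge j -> i
                C[(S, i)] = min(C[(S - {i}, j)] + dist[j][i] for j in S - {i})
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(C[(full, i)] + dist[i][0] for i in full)

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(held_karp(dist))  # 80, via the tour 0 -> 1 -> 3 -> 2 -> 0
```

Storing a result per (subset, endpoint) pair is what makes this dynamic programming; the pruning/best-first ideas mentioned above would further cut down which subsets are ever expanded.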