The.Algorithm.Design.Manual.Springer-Verlag.1998
Lecture 18 - shortest path algorithms

What if we know d[i,j]^(m-1), the length of the shortest path from i to j using at most m-1 edges, for all i, j? Then

    d[i,j]^m = Min over k of ( d[i,k]^(m-1) + w[k,j] ), since w[k,k]=0.

This gives us a recurrence, which we can evaluate in a bottom-up fashion:

    for i=1 to n
        for j=1 to n
            for k=1 to n
                d[i,j]^m = Min( d[i,j]^m, d[i,k]^(m-1) + w[k,j] )

This is an O(n^3) algorithm just like matrix multiplication, but it only goes from m to m+1 edges.

Since the shortest path between any two nodes must use at most n edges (unless we have negative cost cycles), we must repeat that procedure n times (m = 1 to n) for an O(n^4) algorithm.

We can improve this to O(n^3 log n) with the observation that any path using at most 2m edges is the concatenation of two paths using at most m edges each. This is just like computing a^n by repeated squaring, a^(2m) = a^m * a^m, so a logarithmic number of multiplications suffice for exponentiation.
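The "multiply the distance matrix with itself" idea above can be sketched as follows. This is a minimal Python sketch under illustrative assumptions not in the notes: the graph is stored as an n x n weight matrix with w[i][i] = 0 and float("inf") for missing edges.

```python
INF = float("inf")

def min_plus_product(A, B):
    """Min-plus 'matrix multiplication': C[i][j] = min over k of A[i][k] + B[k][j].
    If A and B hold shortest paths of at most m edges, C holds paths of at most 2m edges."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] + B[k][j] < C[i][j]:
                    C[i][j] = A[i][k] + B[k][j]
    return C

def all_pairs_by_squaring(w):
    """Repeatedly square the weight matrix until paths of up to n-1 edges are covered:
    O(log n) min-plus products of O(n^3) each, i.e. O(n^3 log n) overall."""
    n = len(w)
    d = w
    m = 1                      # d currently covers paths of at most m edges
    while m < n - 1:
        d = min_plus_product(d, d)   # doubles the edge bound
        m *= 2
    return d
```

One squaring step takes paths of at most m edges to paths of at most 2m, which is exactly why log n rounds suffice.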
Although this is slick, observe that even O(n^3 log n) is slower than running Dijkstra's algorithm starting from each vertex!

The Floyd-Warshall Algorithm

An alternate recurrence yields a more efficient dynamic programming formulation. Number the vertices from 1 to n. Let d[i,j]^k be the length of the shortest path from i to j using only vertices from 1, 2, ..., k as possible intermediate vertices.

What is d[i,j]^0? With no intermediate vertices, any path consists of at most one edge, so d[i,j]^0 = w[i,j]. In general, adding a new vertex k+1 helps iff a path goes through it, so

    d[i,j]^(k+1) = Min( d[i,j]^k, d[i,k+1]^k + d[k+1,j]^k )

Although this looks similar to the previous recurrence, it isn't. The following algorithm implements it:

    d^0 = w
    for k=1 to n
        for i=1 to n
            for j=1 to n
                d[i,j]^k = Min( d[i,j]^(k-1), d[i,k]^(k-1) + d[k,j]^(k-1) )
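The Floyd-Warshall recurrence above translates almost line for line into code. A minimal Python sketch, with the same illustrative matrix representation as before (w[i][i] = 0, float("inf") for missing edges); it updates d in place, which is safe because row k and column k are unchanged during round k:

```python
INF = float("inf")

def floyd_warshall(w):
    """All-pairs shortest paths in O(n^3).
    After round k, d[i][j] is the shortest i->j path using only
    vertices 0..k as possible intermediate vertices."""
    n = len(w)
    d = [row[:] for row in w]   # d^0 = w: no intermediate vertices allowed
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Note the contrast with the previous recurrence: one triple loop total, not one triple loop per value of m, which is where the factor of n (or log n) is saved.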