The.Algorithm.Design.Manual.Springer-Verlag.1998
Lecture 2 - asymptotic notation

Problem 1.2-6: How can we modify almost any algorithm to have a good best-case running time?

To improve the best case, all we have to do is be able to solve one instance of each size efficiently. We could modify our algorithm to first test whether the input is the special instance we know how to solve, and then output the canned answer.

For sorting, we can check whether the values are already ordered, and if so output them. For the traveling salesman problem, we can check whether the points lie on a line, and if so output the points in that order.

The supercomputer people pull this trick on the Linpack benchmarks!

Because it is so easy to cheat with the best-case running time, we usually don't rely too much on it. Because it is usually very hard to compute the average running time, since we must somehow average over all the instances, we usually strive to analyze the worst-case running time. The worst case is usually fairly easy to analyze and often close to the average or real running time.

Exact Analysis is Hard!

We have agreed that the best, worst, and average case complexity of an algorithm is a numerical function of the size of the instances.
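The best-case trick for sorting can be sketched in a few lines. This is an illustrative sketch, not code from the notes; the function name is made up, and the point is only that one cheap O(n) scan detects the canned instance before the real work starts:

```python
def sort_with_good_best_case(a):
    """Sort a list, but first test for the one instance per size
    we know how to solve instantly: already-ordered input."""
    # Best-case trick: a single O(n) scan detects the special instance.
    if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):
        return a[:]      # already sorted -- output the canned answer
    return sorted(a)     # otherwise fall back on a real sorting routine
```

The best case is now linear regardless of which sorting routine sits behind the fallback, which is exactly why best-case figures are so easy to cheat on.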
However, it is difficult to work with exactly because it is typically very complicated! Thus it is usually cleaner and easier to talk about upper and lower bounds on the function. This is where the dreaded big O notation comes in! Since running our algorithm on a machine which is twice as fast will affect the running time only by a multiplicative constant of 2, we are going to have to ignore constant factors anyway.

Names of Bounding Functions

Now that we have clearly defined the complexity functions we are talking about, we can talk about upper and lower bounds on them:

● g(n) = O(f(n)) means c·f(n) is an upper bound on g(n).
● g(n) = Ω(f(n)) means c·f(n) is a lower bound on g(n).
● g(n) = Θ(f(n)) means c1·f(n) is an upper bound on g(n) and c2·f(n) is a lower bound on g(n).

Got it? c, c1, and c2 are all constants independent of n. All of these definitions imply a constant beyond which they are satisfied. We do not care about small values of n.
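The definitions above can be made concrete with witness constants. The function g(n) = 3n² + 2n below is my own example, not one from the notes; it is Θ(n²), witnessed by c1 = 3, c2 = 4, and a threshold n0 = 2 past which both bounds hold:

```python
def g(n):
    return 3 * n * n + 2 * n    # g(n) = 3n^2 + 2n

def f(n):
    return n * n                # f(n) = n^2

# g(n) = Theta(f(n)): c1*f(n) is a lower bound and c2*f(n) an upper
# bound for every n beyond the constant n0 (here 4n^2 >= 3n^2 + 2n
# exactly when n >= 2).  Small values of n do not matter.
c1, c2, n0 = 3, 4, 2
assert all(c1 * f(n) <= g(n) <= c2 * f(n) for n in range(n0, 1000))
```

Note that the constants are not unique: any larger c2 or n0 would witness the same upper bound, which is why constant factors are ignored in the first place.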