The Algorithm Design Manual. Springer-Verlag, 1998.




Randomization

    leftwall = low
    for i = low+1 to high
        if (A[i] < pivot) then
            leftwall = leftwall+1
            swap(A[i],A[leftwall])
    swap(A[low],A[leftwall])
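The partition step above can be sketched in Python as follows. This is a minimal illustration, not the book's own code: the pivot is taken to be A[low], and the randomized variant swaps a randomly chosen element into that position first so the pivot choice does not depend on the input order.

```python
import random

def partition(A, low, high):
    """Partition A[low..high] around pivot A[low]; leftwall marks the
    boundary of elements already known to be smaller than the pivot."""
    pivot = A[low]
    leftwall = low
    for i in range(low + 1, high + 1):
        if A[i] < pivot:
            leftwall += 1
            A[i], A[leftwall] = A[leftwall], A[i]
    A[low], A[leftwall] = A[leftwall], A[low]   # pivot lands in final place
    return leftwall

def quicksort(A, low=0, high=None):
    """Randomized quicksort: pick a random pivot, partition, recurse."""
    if high is None:
        high = len(A) - 1
    if low < high:
        r = random.randint(low, high)        # random pivot index
        A[low], A[r] = A[r], A[low]          # move it into pivot position
        p = partition(A, low, high)
        quicksort(A, low, p - 1)
        quicksort(A, p + 1, high)
    return A
```

After partitioning, the pivot sits at index leftwall with everything smaller to its left, so the two recursive calls never need to touch it again.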

Mergesort ran in O(n log n) time because we split the keys into two equal halves, sorted them recursively, and then merged the halves in linear time. Thus whenever our pivot element is near the center of the sorted array (i.e. the pivot is close to the median element), we get a good split and realize the same performance as mergesort. Such good pivot elements are common; after all, half the elements lie closer to the middle than to one of the ends. On average, Quicksort runs in O(n log n) time. If we are extremely unlucky and our randomly selected elements are always among the largest or smallest elements in the array, Quicksort turns into selection sort and runs in O(n^2) time. However, the odds of this happening are vanishingly small.
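The average-case claim can be checked empirically. The sketch below (my own measurement harness, not from the book) counts element comparisons made by randomized quicksort over several random inputs; for random pivots, the expected count is roughly 2n ln n.

```python
import math
import random

def quicksort_comparisons(A):
    """Run randomized quicksort on A in place, returning the number of
    element comparisons performed."""
    count = 0
    def sort(low, high):
        nonlocal count
        if low >= high:
            return
        r = random.randint(low, high)        # random pivot choice
        A[low], A[r] = A[r], A[low]
        pivot, leftwall = A[low], low
        for i in range(low + 1, high + 1):
            count += 1                       # one comparison per element
            if A[i] < pivot:
                leftwall += 1
                A[i], A[leftwall] = A[leftwall], A[i]
        A[low], A[leftwall] = A[leftwall], A[low]
        sort(low, leftwall - 1)
        sort(leftwall + 1, high)
    sort(0, len(A) - 1)
    return count

random.seed(42)
n = 1024
trials = [quicksort_comparisons(random.sample(range(10 * n), n))
          for _ in range(20)]
avg = sum(trials) / len(trials)
# For n = 1024, 2n ln n is about 14,200 -- far below the n^2 worst case.
```

The averaged count stays close to 2n ln n, well under the roughly half-a-million comparisons a quadratic run would need at this size.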

Randomization is a powerful, general tool to improve algorithms with bad worst-case but good average-case complexity. The worst-case examples still exist, but they depend only upon how unlucky we are, not on the order in which the input data is presented to us. For randomly chosen pivots, we can say that "Randomized quicksort runs in O(n log n) time on any input, with high probability."

If instead we used some deterministic rule for selecting pivots (like always selecting A[(low+high)/2] as the pivot), we can make no claim stronger than "Quicksort runs in O(n log n) time if you give me random input data, with high probability."
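The difference between the two claims is easy to demonstrate. With the deterministic rule "pivot = A[low]" (an illustrative choice, not the book's), an already-sorted array is a worst case: every pivot is the minimum of its range, each partition peels off a single element, and the algorithm degrades to selection-sort behaviour with exactly n(n-1)/2 comparisons. A randomized pivot faces no such bad *input*, only bad *luck*.

```python
def quicksort_first_pivot_comparisons(A):
    """Quicksort with the deterministic rule pivot = A[low], counting
    comparisons. Iterative over (low, high) ranges, since sorted input
    drives the recursion depth to n."""
    count = 0
    stack = [(0, len(A) - 1)]
    while stack:
        low, high = stack.pop()
        if low >= high:
            continue
        pivot, leftwall = A[low], low
        for i in range(low + 1, high + 1):
            count += 1                       # one comparison per element
            if A[i] < pivot:
                leftwall += 1
                A[i], A[leftwall] = A[leftwall], A[i]
        A[low], A[leftwall] = A[leftwall], A[low]
        stack.append((low, leftwall - 1))
        stack.append((leftwall + 1, high))
    return count

n = 300
worst = quicksort_first_pivot_comparisons(list(range(n)))
# Sorted input with a first-element pivot: n(n-1)/2 comparisons.
```

Here the adversary who knows the pivot rule can force quadratic behaviour simply by handing over sorted data; against random pivots there is no such input to construct.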

Randomization can also be used to drive search techniques such as simulated annealing, which are discussed in a later section.

