

that some of them are also suited for the goal of convolution, so we name a few: The Walsh–Hadamard transform, for which one needs no multiplication, only addition; the discrete cosine transform (DCT), which is a real-signal, real-multiplication analogue to the DFT; various wavelet transforms, which sometimes admit of very fast (O(N) rather than O(N ln N)) algorithms; real-valued FFT, which uses either cos or sin in real-only summands; the real-signal Hartley transform, and so on. Various of these options are discussed in [Crandall 1994b, 1996a].
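As a concrete aside (an illustration of ours, not part of the text proper), the "additions only" remark about the Walsh–Hadamard transform can be seen in a few lines of code; the sketch below assumes a signal whose length is a power of two and computes the unnormalized transform with nothing but sums and differences.

```python
# Minimal sketch of the fast Walsh-Hadamard transform (assumption: len(x) is a
# power of two).  Every operation in the butterfly is a sum or a difference,
# illustrating that no multiplications are needed.

def walsh_hadamard(x):
    """Return the (unnormalized) Walsh-Hadamard transform of signal x."""
    a = list(x)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                u, v = a[j], a[j + h]
                a[j], a[j + h] = u + v, u - v   # additions/subtractions only
        h *= 2
    return a

# Example: a length-4 signal
print(walsh_hadamard([1, 0, 1, 0]))   # [2, 2, 0, 0]
```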

Just to clear the air, we hereby make explicit the almost trivial difference between the DFT and the celebrated fast Fourier transform (FFT). The FFT is an operation belonging to the general class of divide-and-conquer algorithms, one that calculates the DFT of Definition 9.5.3. The FFT will typically appear in our algorithm layouts in the form X = FFT(x), where it is understood that the DFT is being calculated. Similarly, an operation FFT^{-1}(x) returns the inverse DFT. We make the distinction explicit because “FFT” is in some sense a misnomer: The DFT is a certain sum, an algebraic quantity, yet the FFT is an algorithm. Here is a heuristic analogy to the distinction: In this book, the equivalence classes x (mod N) are theoretical entities, whereas the operation of reducing x modulo p we have chosen to write a little differently, as x mod p. By the same token, within an algorithm the notation X = FFT(x) means that we are performing an FFT operation on the signal x; and this operation gives, of course, the result DFT(x). (Yet another reason to make the almost trivial distinction is that we have known students who incorrectly infer that an FFT is some kind of “approximation” to the DFT, when in fact, the FFT is sometimes more accurate than a literal DFT summation, in the sense of roundoff error, mainly because of reduced operation count for the FFT.)
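To see the point numerically (again an illustration of ours, with numpy assumed available), one can compare a literal DFT summation against a library FFT; with the sign convention X_k = \sum_j x_j g^{-jk}, g = e^{2\pi i/D}, the two agree to within roundoff.

```python
# The literal DFT sum (an O(D^2) computation) versus a library FFT: both
# compute the same algebraic quantity, differing only by roundoff error.

import numpy as np

def dft_literal(x):
    """Compute the DFT by the defining sum X_k = sum_j x_j g^{-jk}."""
    D = len(x)
    g = np.exp(2j * np.pi / D)
    return np.array([sum(x[j] * g**(-j * k) for j in range(D)) for k in range(D)])

x = np.random.rand(64)
X_sum = dft_literal(x)
X_fft = np.fft.fft(x)                   # FFT algorithm, same DFT result
print(np.max(np.abs(X_sum - X_fft)))    # tiny, on the order of roundoff
```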

The basic FFT algorithm notion has been traced all the way back to some observations of Gauss, yet some authors ascribe the birth of the modern theory to the Danielson–Lanczos identity, applicable when the signal length D is even:

DFT(x) = \sum_{j=0}^{D-1} x_j g^{-jk} = \sum_{j=0}^{D/2-1} x_{2j} \left(g^2\right)^{-jk} + g^{-k} \sum_{j=0}^{D/2-1} x_{2j+1} \left(g^2\right)^{-jk}.    (9.22)

A beautiful identity indeed: A DFT sum for signal length D is split into two sums, each of length D/2. In this way the Danielson–Lanczos identity ignites a recursive method for calculating the transform. Note the so-called twiddle factors g^{-k}, which figure naturally into the following recursive form of FFT. In this and subsequent algorithm layouts we denote by len(x) the length of a signal x. In addition, when we perform element concatenations of the form (a_j)_{j∈J} we mean the result to be a natural, left-to-right, element concatenation as the increasing index j runs through a given set J. Similarly, U ∪ V is a signal having the elements of V appended to the right of the elements of U.
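Before that formal layout, a minimal recursive sketch (ours, under the assumption that the signal length is a power of two) may help fix ideas: it follows the Danielson–Lanczos split (9.22) directly, transforming the even- and odd-indexed halves and recombining them with the twiddle factors g^{-k}.

```python
# Recursive FFT sketch built on the Danielson-Lanczos identity (9.22), with
# the convention X_k = sum_j x_j g^{-jk}, g = e^{2*pi*i/D}.  Assumes len(x)
# is a power of two; this is an illustration, not the book's algorithm layout.

import cmath

def fft(x):
    """Return DFT(x) by recursive even/odd splitting."""
    D = len(x)
    if D == 1:
        return list(x)
    even = fft(x[0::2])                 # DFT of (x_0, x_2, ...), length D/2
    odd  = fft(x[1::2])                 # DFT of (x_1, x_3, ...), length D/2
    X = [0] * D
    for k in range(D // 2):
        t = cmath.exp(-2j * cmath.pi * k / D) * odd[k]   # twiddle factor g^{-k}
        X[k] = even[k] + t
        X[k + D // 2] = even[k] - t     # since g^{-(k + D/2)} = -g^{-k}
    return X

print(fft([1, 2, 3, 4]))   # [(10+0j), (-2+2j), (-2+0j), (-2-2j)]
```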
