Prime Numbers


9.5 Large-integer multiplication

know that the complexity must be O(D ln D) operations, and as we have said, these are usually, in practice, floating-point operations (both adds and multiplies are bounded in this fashion). Now the bit complexity is not O((n/b) ln(n/b)); that is, we cannot simply substitute D = n/b into the operation-complexity estimate, because floating-point arithmetic on larger digits must, of course, be more expensive. When these notions are properly analyzed we obtain the Strassen bound of

O(n (C ln n)(C ln ln n)(C ln ln ln n) · · ·)

bit operations for the basic FFT multiply, where C is a constant and the ln ln · · · chain is understood to terminate when it falls below 1. Before we move ahead with other estimates, we must point out that even though this bit complexity is not asymptotically optimal, some of the greatest achievements in the general domain of large-integer arithmetic have been attained with this basic Schönhage–Strassen FFT, and yes, using floating-point operations.

Now, the Schönhage Algorithm 9.5.23 gets neatly around the problem that for a fixed number of signal digits D, the digit operations (small multiplications) must get more complex for larger operands. Analysis of the recursion within the algorithm starts with the observation that at the top recursion level there are two DFTs (but very simple ones, in which only shifting and adding occur) and the dyadic multiply. Detailed analysis yields the best-known complexity bound of

O(n (ln n)(ln ln n))

bit operations, although the Nussbaumer method's complexity, which we discuss next, is asymptotically equivalent.

Next, as Exercise 9.67 shows, the complexity of Nussbaumer convolution is O(D ln D) operations in the ring R. This is equivalent to the complexity of floating-point FFT methods, if ring operations are thought of as equivalent to floating-point operations. However, with the Nussbaumer method there is a difference: one may choose the digit base B with impunity. Consider a base B ∼ n, so that b ∼ ln n, in which case one is effectively using D = n/ln n digits. It turns out that the Nussbaumer method for integer multiplication then takes O(n ln ln n) additions and O(n) multiplications of numbers each having O(ln n) bits. It follows that the complexity of the Nussbaumer method is asymptotically that of the Schönhage method, i.e., O(n ln n ln ln n) bit operations. Such complexity issues for both Nussbaumer and the original Schönhage–Strassen algorithm are discussed in [Bernstein 1997].
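To make the digit-splitting concrete, here is a minimal floating-point FFT multiply in the spirit of the basic method discussed above: each operand is split into digits of b bits (base B = 2^b), the digit signals are transformed, multiplied pointwise, inverse-transformed, rounded to integers, and carried. This is an illustrative sketch only; the function names are ours, and a production routine must shrink b as n grows to keep the floating-point rounding safe, which is precisely the digit-size tradeoff analyzed in the text.

```python
import cmath

def fft(a, invert=False):
    # recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y, b=16):
    # multiply nonnegative integers x, y via a floating-point FFT,
    # using digits of b bits each (digit base B = 2**b)
    B = 1 << b
    def digits(v):
        d = []
        while v:
            d.append(v & (B - 1))
            v >>= b
        return d or [0]
    dx, dy = digits(x), digits(y)
    n = 1
    while n < len(dx) + len(dy):   # zero-pad to avoid cyclic wraparound
        n <<= 1
    fx = fft([complex(d) for d in dx] + [0j] * (n - len(dx)))
    fy = fft([complex(d) for d in dy] + [0j] * (n - len(dy)))
    z = fft([fx[i] * fy[i] for i in range(n)], invert=True)
    # round each acyclic-convolution coefficient and release carries in base B
    result, carry = 0, 0
    for i in range(n):
        t = int(round(z[i].real / n)) + carry
        carry, digit = divmod(t, B)
        result += digit << (b * i)
    return result + (carry << (b * n))
```

The rounding step is where the base matters: the convolution coefficients can be as large as about D · B^2, so b must be small enough that they fit within the 53-bit mantissa of a double, with room to spare for accumulated FFT error.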

Chapter 9 Fast Algorithms for Large-Integer Arithmetic

    Algorithm                   optimal B       complexity
    Basic FFT, fixed-base       ...             O_op(D ln D)
    Basic FFT, variable-base    O(ln n)         O(n(C ln n)(C ln ln n) · · ·)
    Schönhage                   O(n^(1/2))      O(n ln n ln ln n)
    Nussbaumer                  O(n/ln n)       O(n ln n ln ln n)

Table 9.1 Complexities for fast multiplication algorithms. Operands to be multiplied have n bits each, which during the top recursion level are split into D = n/b digits of b bits each, so the digit size (the base) is B = 2^b. All bounds are for bit complexity, except that O_op means operation complexity.

9.5.9 Application to the Chinese remainder theorem

We described the Chinese remainder theorem in Section 2.1.3, and there gave a method, Algorithm 2.1.7, for reassembling CRT data given some precomputation. We now describe a method that takes advantage not only of preconditioning but also of fast multiplication methods.

Algorithm 9.5.26 (Fast CRT reconstruction with preconditioning). Using the nomenclature of Theorem 2.1.6, we assume fixed moduli m_0, ..., m_{r-1} whose product is M, but with r = 2^k for computational convenience. The goal of the algorithm is to reconstruct n from its given residues (n_i). Along the way, tableaux (q_ij) of partial products and (n_ij) of partial residues are calculated. The algorithm may be reentered with a new n if the m_i remain fixed.

1. [Precomputation] for(0 ≤ i
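For contrast with the fast, tree-organized Algorithm 9.5.26, a baseline preconditioned CRT reconstruction in the style of Algorithm 2.1.7 can be sketched as follows. The function and variable names here are illustrative, not from the book: the precomputation of M, the cofactors M/m_i, and their inverses is done once for fixed moduli, after which each reconstruction of n is a single weighted sum mod M.

```python
from math import prod

def crt_precompute(moduli):
    # one-time preconditioning for fixed, pairwise coprime moduli m_i
    M = prod(moduli)
    cofactors = [M // m for m in moduli]                            # M_i = M / m_i
    inverses = [pow(c, -1, m) for c, m in zip(cofactors, moduli)]   # M_i^(-1) mod m_i
    return M, cofactors, inverses

def crt_reconstruct(residues, pre):
    # reassemble n (mod M) from its residues n_i = n mod m_i
    M, cofactors, inverses = pre
    return sum(r * v * c for r, v, c in zip(residues, inverses, cofactors)) % M
```

This direct sum costs r large multiplications per reconstruction; the point of Algorithm 9.5.26 is to organize the partial products into a balanced binary tree of r = 2^k leaves so that fast multiplication methods apply at every level.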

