Prime Numbers


9.4 Enhancements for gcd and inverse

... of tens of thousands of bits (although this "breakover" threshold depends strongly on machinery and on various options such as the choice of an alternative classical gcd algorithm at recursion bottom). As an example application, recall that for inversionless ECM, Algorithm 7.4.4, we require a gcd. If one is attempting to find a factor of the Fermat number F_24 (nobody has yet been successful in that), there will be gcd arguments of about 16 million bits, a region where recursive gcds with the above complexity radically dominate, performance-wise, all other alternatives. Later in this section we give some specific timing estimates.

The basic idea of the KSgcd scheme is that the remainder and quotient sequences of a classical gcd algorithm differ radically in the following sense. Let x, y each be of size N. Referring to the Euclid Algorithm 2.1.2, denote by (r_j, r_{j+1}) for j ≥ 0 the pairs that arise after j passes of the loop. So a remainder sequence is defined as (r_0 = x, r_1 = y, r_2, r_3, ...). Similarly there is an implicit quotient sequence (q_1, q_2, ...) defined by

    r_j = q_{j+1} r_{j+1} + r_{j+2}.

In performing the classical gcd one is essentially iterating such a quotient-remainder relation until some r_k is zero, in which case the previous remainder r_{k-1} is the gcd. Now for the radical difference between the q and r sequences: as enunciated elegantly by [Cesari 1998], the total number of bits in the remainder sequence is expected to be O(ln^2 N), and so naturally any gcd algorithm that refers to every r_j is bound to admit, at best, of quadratic complexity. On the other hand, the quotient sequence (q_1, ..., q_{k-1}) tends to have relatively small elements. The recursive notion stems from the fact that knowing the q_j yields any one of the r_j in nearly linear time [Cesari 1998].

Let us try an example of remainder-quotient sequences. (We choose moderately large inputs x, y here for later illustration of the recursive idea.) Take

    (r_0, r_1) = (x, y) = (31416, 27183),

whence

    r_0 = q_1 r_1 + r_2 = 1 · r_1 + 4233,
    r_1 = q_2 r_2 + r_3 = 6 · r_2 + 1785,
    r_2 = q_3 r_3 + r_4 = 2 · r_3 + 663,
    r_3 = q_4 r_4 + r_5 = 2 · r_4 + 459,
    r_4 = q_5 r_5 + r_6 = 1 · r_5 + 204,
    r_5 = q_6 r_6 + r_7 = 2 · r_6 + 51,
    r_6 = q_7 r_7 + r_8 = 4 · r_7 + 0.

Evidently, gcd(x, y) = r_7 = 51, but notice the quotient sequence goes (1, 6, 2, 2, 1, 2, 4); in fact these are the elements of the simple continued fraction for the rational x/y. The trend is typical: most quotient elements are expected to be small.
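As a quick illustration (a minimal sketch, not code from the text), the following Python snippet runs the classical Euclid loop on the inputs above and records both sequences; the function name euclid_sequences is ours, chosen only for this example.

    # Minimal sketch: classical Euclid loop recording the remainder
    # sequence (r_0, r_1, ...) and the quotient sequence (q_1, q_2, ...).
    def euclid_sequences(x, y):
        r = [x, y]        # r_0 = x, r_1 = y
        q = []            # q_1, q_2, ...
        while r[-1] != 0:
            quot, rem = divmod(r[-2], r[-1])   # r_j = q_{j+1} r_{j+1} + r_{j+2}
            q.append(quot)
            r.append(rem)
        return r, q

    r, q = euclid_sequences(31416, 27183)
    print(r)        # [31416, 27183, 4233, 1785, 663, 459, 204, 51, 0]
    print(q)        # [1, 6, 2, 2, 1, 2, 4] -- the continued-fraction elements of x/y
    print(r[-2])    # 51, the gcd
    print(sum(t.bit_length() for t in r if t))   # total remainder bits, the O(ln^2 N) quantity

The contrast between the many remainder bits and the handful of small quotients is exactly the asymmetry the KSgcd scheme exploits.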

To formalize how remainder terms can be gotten from known quotient terms, we can use the matrix-vector identity, valid for 0 ≤ i ≤ j,

    (r_{j+1}, r_{j+2})^T = M(q_{j+1}) M(q_j) · · · M(q_{i+1}) (r_i, r_{i+1})^T,

where M(q) denotes the 2 × 2 matrix with rows (0, 1) and (1, -q); a single application of M(q_{j+1}) simply restates the relation r_{j+2} = r_j - q_{j+1} r_{j+1}.
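To make the identity concrete, here is a small Python check (again an illustrative sketch in the notation above, not code from the text); the helper names mat_mul and remainder_pair are ours.

    # Recover (r_{j+1}, r_{j+2}) from the quotients alone, using
    # M(q) = [[0, 1], [1, -q]], so (r_{j+1}, r_{j+2})^T = M(q_{j+1}) (r_j, r_{j+1})^T.
    def mat_mul(A, B):
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

    def remainder_pair(q, r0, r1, j):
        # q[k] holds q_{k+1}; accumulate M(q_{j+1}) ... M(q_1) and apply it to (r_0, r_1).
        M = [[1, 0], [0, 1]]
        for k in range(j + 1):
            M = mat_mul([[0, 1], [1, -q[k]]], M)
        return (M[0][0]*r0 + M[0][1]*r1, M[1][0]*r0 + M[1][1]*r1)

    q = [1, 6, 2, 2, 1, 2, 4]                   # quotients from the worked example
    print(remainder_pair(q, 31416, 27183, 3))   # (663, 459) = (r_4, r_5)
    print(remainder_pair(q, 31416, 27183, 6))   # (51, 0): the gcd appears as r_7

The naive left-to-right product above is only a check of the identity; the near-linear running time claimed for the recursive scheme comes from combining the quotient matrices in a balanced, divide-and-conquer fashion with fast multiplication.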

