Prime Numbers
9.4 Enhancements for gcd and inverse

of tens of thousands of bits (although this “breakover” threshold depends strongly on machinery and on various options such as the choice of an alternative classical gcd algorithm at recursion bottom). As an example application, recall that for inversionless ECM, Algorithm 7.4.4, we require a gcd. If one is attempting to find a factor of the Fermat number F_24 (nobody has yet been successful in that), there will be gcd arguments of about 16 million bits, a region where recursive gcds with the above complexity radically dominate, performance-wise, all other alternatives. Later in this section we give some specific timing estimates.

The basic idea of the KSgcd scheme is that the remainder and quotient sequences of a classical gcd algorithm differ radically in the following sense. Let x, y each be of size N. Referring to the Euclid Algorithm 2.1.2, denote by (r_j, r_{j+1}) for j ≥ 0 the pairs that arise after j passes of the loop. So a remainder sequence is defined as (r_0 = x, r_1 = y, r_2, r_3, ...). Similarly there is an implicit quotient sequence (q_1, q_2, ...) defined by

    r_j = q_{j+1} r_{j+1} + r_{j+2}.

In performing the classical gcd one is essentially iterating such a quotient-remainder relation until some r_k is zero, in which case the previous remainder r_{k−1} is the gcd. Now for the radical difference between the q and r sequences: As enunciated elegantly by [Cesari 1998], the total number of bits in the remainder sequence is expected to be O(ln^2 N), and so naturally any gcd algorithm that refers to every r_j is bound to admit, at best, of quadratic complexity. On the other hand, the quotient sequence (q_1, ..., q_{k−1}) tends to have relatively small elements. The recursive notion stems from the fact that knowing the q_j yields any one of the r_j in nearly linear time [Cesari 1998].

Let us try an example of remainder-quotient sequences. (We choose moderately large inputs x, y here for later illustration of the recursive idea.)
Take

    (r_0, r_1) = (x, y) = (31416, 27183),

whence

    r_0 = q_1 r_1 + r_2 = 1 · r_1 + 4233,
    r_1 = q_2 r_2 + r_3 = 6 · r_2 + 1785,
    r_2 = q_3 r_3 + r_4 = 2 · r_3 + 663,
    r_3 = q_4 r_4 + r_5 = 2 · r_4 + 459,
    r_4 = q_5 r_5 + r_6 = 1 · r_5 + 204,
    r_5 = q_6 r_6 + r_7 = 2 · r_6 + 51,
    r_6 = q_7 r_7 + r_8 = 4 · r_7 + 0.

Evidently, gcd(x, y) = r_7 = 51, but notice the quotient sequence goes (1, 6, 2, 2, 1, 2, 4); in fact these are the elements of the simple continued fraction for the rational x/y. The trend is typical: Most quotient elements are expected to be small.
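The quotient and remainder sequences above can be generated with a few lines of code. The following Python sketch runs classical Euclid and records both sequences; it illustrates the definitions only and is not the recursive KSgcd algorithm itself.

```python
def quotient_remainder_sequences(x, y):
    """Run the classical Euclid algorithm on (x, y), recording the
    remainder sequence (r_0, r_1, r_2, ...) and the implicit quotient
    sequence (q_1, q_2, ...) with r_j = q_{j+1} r_{j+1} + r_{j+2}."""
    r = [x, y]
    q = []
    while r[-1] != 0:
        q.append(r[-2] // r[-1])  # q_{j+1} = floor(r_j / r_{j+1})
        r.append(r[-2] % r[-1])   # r_{j+2} = r_j mod r_{j+1}
    return r, q

r, q = quotient_remainder_sequences(31416, 27183)
print(q)      # quotient sequence: [1, 6, 2, 2, 1, 2, 4]
print(r[-2])  # last nonzero remainder, the gcd: 51
```

As the text notes, the quotients are exactly the partial quotients of the simple continued fraction for x/y, and they stay small while the remainders shrink steadily.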
To formalize how remainder terms can be gotten from known quotient terms, we can use the matrix-vector identity, valid for j ≥ 0,

    ( r_{j+1} )   ( 0    1        ) ( r_j     )
    ( r_{j+2} ) = ( 1  −q_{j+1}   ) ( r_{j+1} ),

which is just the defining relation r_{j+2} = r_j − q_{j+1} r_{j+1} in matrix form; iterating it from (r_0, r_1) using only the quotients recovers any desired remainder, such as r_5 in the example above.
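As a concrete (if naive and non-recursive) illustration of this identity, the following Python sketch recovers the whole remainder sequence from (r_0, r_1) and the quotients alone. The recursive scheme gets its nearly linear time by multiplying the 2×2 step matrices together with fast multiplication rather than applying them one at a time as done here.

```python
def remainders_from_quotients(r0, r1, quotients):
    """Recover the remainder sequence from (r0, r1) and the quotient
    sequence by iterating the matrix-vector step
    (r_{j+1}, r_{j+2}) = (r_{j+1}, r_j - q_{j+1} * r_{j+1})."""
    r = [r0, r1]
    for qj in quotients:
        r.append(r[-2] - qj * r[-1])  # one application of the 2x2 step matrix
    return r

# Quotients from the worked example for (x, y) = (31416, 27183):
r = remainders_from_quotients(31416, 27183, [1, 6, 2, 2, 1, 2, 4])
print(r)  # [31416, 27183, 4233, 1785, 663, 459, 204, 51, 0]
```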