Prime Numbers


allows one to use the same complexity estimates that one would have if one had sieved instead.

Assuming that about a total of B² pairs a, b are put into the linear form a − bm, at the end, a total of B²k pairs of the linear form and the norm form of a polynomial are checked for simultaneous smoothness (the first being B-smooth, the second B/k-smooth). If the parameters are chosen so that at most B²/k pairs a, b survive the first sieve, then the total time spent is not much more than B² total. This savings leads to a lower complexity in
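The cost accounting above can be made concrete with toy numbers. This is an illustrative sketch only; the values of B and k below are arbitrary small choices, not tuned NFS parameters:

```python
# Toy cost accounting for the many-polynomial variant.
# B and k are illustrative values, not realistic NFS parameters.
B = 10**3   # smoothness bound (toy value)
k = 10      # number of polynomials whose norm forms are used (toy value)

pairs_sieved = B**2          # pairs a, b put into the linear form a - b*m
survivors = B**2 // k        # pairs allowed to survive the first sieve
norm_checks = survivors * k  # each survivor tested against k norm forms

# Sieving B^2 linear forms plus norm checks on the survivors keeps the
# total work near B^2, rather than the B^2 * k of checking every pair
# against every polynomial.
total = pairs_sieved + norm_checks
print(total)
```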

NFS. Coppersmith gives a heuristic argument that with an optimal choice of parameters the running time to factor n is exp((c + o(1))(ln n)^{1/3}(ln ln n)^{2/3}), where

c = (1/3)(92 + 26√13)^{1/3} ≈ 1.9019.

This compares with the value c = (64/9)^{1/3} ≈ 1.9230 for the NFS as described in Algorithm 6.2.5. As mentioned previously, the smaller c in Coppersmith’s method is offset by a “fatter” o(1). This secondary factor likely makes the crossover point, after which Coppersmith’s variant is superior, in the thousands of digits. Before we reach this point, NFS will probably have been replaced by far better methods. Nevertheless, Coppersmith’s variant of NFS currently stands as the asymptotically fastest heuristic factoring method known.
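As a quick numerical sanity check, both constants can be evaluated directly from the formulas quoted above:

```python
from math import sqrt

# Coppersmith's constant: (1/3) * (92 + 26*sqrt(13))^(1/3)
c_coppersmith = (92 + 26 * sqrt(13)) ** (1 / 3) / 3

# Standard-NFS constant: (64/9)^(1/3)
c_nfs = (64 / 9) ** (1 / 3)

print(f"{c_coppersmith:.4f}")  # 1.9019
print(f"{c_nfs:.4f}")          # 1.9230
```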

There may yet be some practical advantage to using many polynomials. For a discussion, see [Elkenbracht-Huizing 1997].

6.3 Rigorous factoring

None of the factoring methods discussed so far in this chapter are rigorous. However, the subexponential ECM, discussed in the next chapter, comes close to being rigorous. Assuming a reasonable conjecture about the distribution in short intervals of smooth numbers, [Lenstra 1987] shows that ECM is expected to find the least prime factor p of the composite number n in exp((2 + o(1))√(ln p ln ln p)) arithmetic operations with integers the size of n, the “o(1)” term tending to 0 as p → ∞. Thus, ECM requires only one heuristic “leap.” In contrast, QS and NFS seem to require several heuristic leaps in their analyses.
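To get a feel for this bound, one can tabulate exp(2·√(ln p · ln ln p)) for a few sizes of p. This is a rough sketch only: the o(1) term is dropped, so these are not exact operation counts:

```python
from math import exp, log

def ecm_ops_estimate(p: int) -> float:
    """Heuristic ECM operation count exp(2*sqrt(ln p * ln ln p)),
    with the o(1) term dropped."""
    lp = log(p)
    return exp(2 * (lp * log(lp)) ** 0.5)

for bits in (64, 128, 256):
    p = 2 ** bits  # stand-in size for the least prime factor
    print(bits, f"{ecm_ops_estimate(p):.2e}")
```

The estimate grows far more slowly than p itself, which is the point of calling the method subexponential.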

It is of interest to see what is the fastest factoring algorithm that we can rigorously analyze. This is not necessarily of practical value, but seems to be required by the dignity of the subject!

The first issue one might address is whether a factoring algorithm is deterministic or probabilistic. Since randomness is such a powerful tool, we would expect to see lower complexity records for probabilistic factoring algorithms over deterministic ones, and indeed we do. The fastest deterministic factoring algorithm that has been rigorously analyzed is the Pollard–Strassen method. This uses fast polynomial evaluation techniques as discussed in Section 5.5, where the running time to factor n is seen to be O(n^{1/4+o(1)}).
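A compact and deliberately unoptimized Python sketch of the Pollard–Strassen idea follows. It uses naive polynomial arithmetic, so it does not attain the O(n^{1/4+o(1)}) bound — that requires the fast multiplication underlying the techniques of Section 5.5 — but it shows the structure: build f(x) = (x+1)(x+2)···(x+B) mod n with B ≈ n^{1/4}, evaluate f at 0, B, 2B, …, (B−1)B via a remainder tree, and find a block whose product shares a factor with n:

```python
from math import gcd, isqrt

def poly_mul(a, b, n):
    # Naive product of polynomials (coefficient lists, lowest degree
    # first), coefficients reduced mod n.
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % n
    return res

def poly_rem(a, m, n):
    # Remainder of a modulo the monic polynomial m, coefficients mod n.
    a = a[:]
    dm = len(m) - 1
    while len(a) - 1 >= dm:
        c, shift = a[-1], len(a) - 1 - dm
        for i in range(dm + 1):
            a[shift + i] = (a[shift + i] - c * m[i]) % n
        a.pop()
    return a

def prod_linear(points, n):
    # Product over p in points of (x - p), mod n.
    f = [1]
    for p in points:
        f = poly_mul(f, [(-p) % n, 1], n)
    return f

def eval_all(f, points, n):
    # Multipoint evaluation: split the point set in half and reduce f
    # modulo each half's subproduct (a remainder tree).
    if len(points) == 1:
        acc = 0
        for c in reversed(f):  # Horner's rule
            acc = (acc * points[0] + c) % n
        return [acc]
    mid = len(points) // 2
    left, right = points[:mid], points[mid:]
    return (eval_all(poly_rem(f, prod_linear(left, n), n), left, n) +
            eval_all(poly_rem(f, prod_linear(right, n), n), right, n))

def pollard_strassen(n):
    # Returns the least prime factor of n, or n itself when n is prime.
    # B > n^(1/4), so the B blocks of length B cover 1..B^2 >= sqrt(n).
    B = isqrt(isqrt(n)) + 1
    f = prod_linear([(n - j) % n for j in range(1, B + 1)], n)  # (x+1)...(x+B)
    vals = eval_all(f, [i * B for i in range(B)], n)
    for i, v in enumerate(vals):
        if gcd(v, n) > 1:
            # Block i is (i*B, (i+1)*B]; scan it for the factor itself.
            for j in range(i * B + 1, (i + 1) * B + 1):
                if j > 1 and n % j == 0:
                    return j
    return n
```

For example, pollard_strassen(10403) returns the least prime factor 101. A serious implementation would replace poly_mul with an FFT-based multiply and reuse one product tree for both building f and the remainder-tree evaluation.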
