Prime Numbers


less than a power of two. But another is to change the Step [Check breakover threshold ...] to test just whether len(T) is odd. These kinds of approaches will ensure that halving of signals can proceed during recursion.

9.8 Research problems

9.77. As we have intimated, the enhancements to power ladders can be intricate, in many respects unresolved. In this exercise we tour some of the interesting problems attendant on such enhancements.

When an inverse is in hand (alternatively, when point negations are available in elliptic algebra), the add/subtract ladder options make the situation more interesting. The add/subtract ladder Algorithm 7.2.4, for example, has an interesting “stochastic” interpretation, as follows. Let x denote a real number in (0, 1) and let y be the fractional part of 3x; i.e., y = 3x − ⌊3x⌋. Then denote the exclusive-or of x, y by z = x ∧ y, meaning z is obtained by an exclusive-or of the bit streams of x and y together. Now investigate this conjecture: If x, y are chosen at random, then with probability 1, one-third of the binary bits of z are ones. If true, this conjecture means that if you have a squaring operation that takes time S, and a multiply operation that takes time M, then Algorithm 7.2.4 takes about time (S + M/3)b, when the relevant operands have b binary bits. How does this compare with the standard binary ladders of Algorithms 9.3.1, 9.3.2? How does it compare with a base-(B = 3) case of the general windowing ladder Algorithm 9.3.3? (In answering this you should be able to determine whether the add/subtract ladder is equivalent or not to some windowing ladder.)

Next, work out a theory of precise squaring and addition counts for practical ladders. For example, a more precise complexity estimate for the left-right binary ladder is C ∼ (b(y) − 1)S + (o(y) − 1)M, where the exponent y has b(y) total bits, of which o(y) are 1’s. Such a theory should be extended to the windowing ladders, with precomputation overhead not ignored. In this way, describe quantitatively what sort of ladder would be best for a typical cryptography application; namely, x, y have say 192 bits each and x^y is to be computed modulo some 192-bit prime.

Next, implement an elliptic multiplication ladder in base B = 16, which means, as in Algorithm 9.3.3, that four bits at a time of the exponent are processed. Note that, as explained in the text following the windowing ladder algorithm, you would need only the following point multiples: P, 3P, 5P, 7P. Of course, one should be precomputing these small multiples also in an efficient manner.
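The stochastic conjecture above is easy to probe numerically. Here is a minimal Monte Carlo sketch in Python (the bit length and trial count are arbitrary illustrative choices); it models x as a random b-bit fraction X/2^b and measures the density of 1 bits in z = x ∧ y.

```python
# Sketch: estimate the density of 1 bits in z = x XOR y, where y = 3x - floor(3x).
# We model x as a b-bit fraction X / 2^b, so y corresponds to (3X mod 2^b) / 2^b.
import random

def ones_density(b=256, trials=10000):
    mask = (1 << b) - 1
    total = 0
    for _ in range(trials):
        X = random.getrandbits(b)      # a random b-bit fraction x = X / 2^b
        Y = (3 * X) & mask             # fractional part of 3x, to b bits
        Z = X ^ Y                      # exclusive-or of the two bit streams
        total += bin(Z).count("1")
    return total / (b * trials)

print(ones_density())                  # the conjecture predicts a value near 1/3
```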

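For the base-16 suggestion just made, the following Python sketch shows a width-4 sliding-window ladder in the multiplicative group modulo a prime, as a stand-in for the elliptic case (point addition and doubling would replace the modular multiply and square). Note that this plain window precomputes the odd powers x, x^3, ..., x^15; getting down to only P, 3P, 5P, 7P requires the digit recoding described after Algorithm 9.3.3, which is not reproduced here.

```python
# Sketch of a width-w sliding-window ladder for modular exponentiation.
def window_pow(x, y, p, w=4):
    """Compute x**y mod p with a width-w sliding window."""
    if y == 0:
        return 1 % p
    # Precompute the odd powers x^1, x^3, ..., x^(2^w - 1).
    x2 = x * x % p
    odd = {1: x % p}
    for k in range(3, 1 << w, 2):
        odd[k] = odd[k - 2] * x2 % p
    bits = bin(y)[2:]
    result = 1
    i = 0
    while i < len(bits):
        if bits[i] == '0':
            result = result * result % p      # one squaring for a zero bit
            i += 1
        else:
            # Take a window of up to w bits starting at a 1 bit,
            # trimmed so that it ends in a 1 bit (keeps the digit odd).
            j = min(i + w, len(bits))
            while bits[j - 1] == '0':
                j -= 1
            digit = int(bits[i:j], 2)
            for _ in range(j - i):
                result = result * result % p  # one squaring per bit consumed
            result = result * odd[digit] % p  # one multiply per window
            i = j
    return result

p = (1 << 61) - 1                             # a convenient Mersenne prime for testing
assert window_pow(1234567, 987654321, p) == pow(1234567, 987654321, p)
```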


Next, study yet other ladder options (and this kind of extension to the exercise reveals just how convoluted is this field of study) as described in [Müller 1997], [De Win et al. 1998], [Crandall 1999b] and references therein. As just one example of attempted refinements, some investigators have considered exponent expansions in which there is some guaranteed number of 0’s interposed between other digits. Then, too, there is the special advantage inherent in highly compressible exponents [Yacobi 1999], such study being further confounded by the possibility of base-dependent compressibility. It is an interesting research matter to ascertain the precise relation between the compressibility of an exponent and the optimal efficiency of powering to said exponent.
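As a concrete starting point for the squaring and multiplication counts requested in this exercise, the classical left-right binary ladder can be instrumented directly. The Python sketch below performs b(y) − 1 squarings and o(y) − 1 multiplications, in line with the estimate C ∼ (b(y) − 1)S + (o(y) − 1)M; extending such instrumentation to windowing ladders (precomputation included) is the real aim.

```python
# Sketch: left-to-right binary ladder with explicit operation counters.
def binary_ladder(x, y, p):
    """Compute x**y mod p, returning (result, squarings, multiplies)."""
    assert y >= 1
    squarings = multiplies = 0
    result = x % p
    for bit in bin(y)[3:]:            # skip the leading 1 bit of y
        result = result * result % p
        squarings += 1
        if bit == '1':
            result = result * x % p
            multiplies += 1
    return result, squarings, multiplies

r, s, m = binary_ladder(3, 0b1011001, 10**9 + 7)
assert r == pow(3, 0b1011001, 10**9 + 7)
print(s, m)                           # b(y) - 1 = 6 squarings, o(y) - 1 = 3 multiplies
```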

9.78. In view of complexity results such as in Exercise 9.37, it would seem that a large-D version of Toom–Cook could, with recursion, be brought down to what is essentially an ideal bit complexity O(ln^{1+ε} N). However, as we have intimated, the additions grow rapidly. Work out a theory of Toom–Cook addition counts, and discuss the tradeoffs between very low multiplication complexity and overwhelming complexity of additions. Note also the existence of addition optimizations, as intimated in Exercise 9.38.

This is a difficult study, but of obvious practical value. For example, there is nothing a priori preventing us from employing different, alternating Toom–Cook schemes within a single, large recursive multiply. Clearly, to optimize such a mixed scheme one should know something about the interplay of the multiply and add counts, as well as other aspects of overhead. Yet another such aspect is the shifting and data shuttling one must do to break up an integer into its Toom–Cook coefficients.
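One way to begin the requested addition-count theory is empirical, with the simplest case: Karatsuba, i.e., Toom–Cook with D = 2. The Python sketch below is a rough model only; the "additions" tallied are big-integer additions and subtractions at each combining step, not word-level operations, and the recursion threshold is arbitrary. Even so, it shows the roughly 3-to-1 ratio of combining additions to base multiplications in this scheme.

```python
# Rough sketch: recursive Karatsuba (Toom-Cook with D = 2) on Python integers,
# counting base-case multiplications and the big-integer additions/subtractions
# performed at each combining step.
import random

MUL_COUNT = 0
ADD_COUNT = 0

def karatsuba(x, y, base_bits=64):
    """Multiply nonnegative integers x and y, tallying operation counts."""
    global MUL_COUNT, ADD_COUNT
    if x.bit_length() <= base_bits or y.bit_length() <= base_bits:
        MUL_COUNT += 1
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)
    y1, y0 = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(x1, y1, base_bits)
    z0 = karatsuba(x0, y0, base_bits)
    ADD_COUNT += 2                     # x1 + x0 and y1 + y0
    z1 = karatsuba(x1 + x0, y1 + y0, base_bits)
    ADD_COUNT += 4                     # z1 - z2 - z0 and the two shifted additions
    return (z2 << (2 * m)) + ((z1 - z2 - z0) << m) + z0

a, b = random.getrandbits(1 << 14), random.getrandbits(1 << 14)
assert karatsuba(a, b) == a * b
print(MUL_COUNT, ADD_COUNT)            # additions outnumber base multiplications ~3 to 1
```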

9.79. How far should one be able to test numerically the Goldbach conjecture by considering the acyclic convolution of the signal

G = (1, 1, 1, 0, 1, 1, 0, 1, 1, 0, ...)

with itself? (Here, as in the text, the signal element G_n equals 1 if and only if 2n + 3 is prime.) What is the computational complexity for this convolution-based approach for the settling of Goldbach’s conjecture for all even numbers not exceeding x? Note that the conjecture has been settled for all even numbers up to x = 4 · 10^14 [Richstein 2001]. We note that explicit FFT-based computations up to 10^8 or so have indeed been performed [Lavenier and Saouter 1998]. Here is an interesting question: Can one resolve Goldbach representations via pure-integer convolution on arrays of b-bit integers (say b = 16 or 32), with prime locations signified by 1 bits, knowing in advance that two prime bits lying in one integer is a relatively rare occurrence?
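To make the convolution idea concrete, here is a small Python/NumPy sketch (the limit 10^5 and the floating-point FFT are illustrative choices; at serious ranges one must control round-off or use an integer transform). It builds the signal G, squares its spectrum, and checks that every even number up to the limit receives at least one representation 2m + 6 = (2i + 3) + (2j + 3) with both summands prime.

```python
# Sketch: verify Goldbach up to LIMIT via the acyclic convolution of G with itself,
# where G[n] = 1 iff 2n + 3 is prime (as in the text).
import numpy as np

LIMIT = 10**5                          # check even numbers 6, 8, ..., LIMIT
N = (LIMIT - 6) // 2 + 1               # indices n with 2n + 3 <= LIMIT - 3

# Sieve of Eratosthenes, then extract the signal G.
sieve = np.ones(LIMIT, dtype=bool)
sieve[:2] = False
for p in range(2, int(LIMIT**0.5) + 1):
    if sieve[p]:
        sieve[p*p::p] = False
G = sieve[3:2*N + 3:2].astype(float)   # G[n] = 1 iff 2n + 3 is prime

# Acyclic convolution via FFT: coefficient m counts representations
# 2m + 6 = (2i + 3) + (2j + 3) with both summands prime.
size = 1
while size < 2*len(G) - 1:
    size *= 2
R = np.fft.irfft(np.fft.rfft(G, size)**2, size)[:2*len(G) - 1]
counts = np.rint(R).astype(int)

# Every even number 6 <= 2m + 6 <= LIMIT should have at least one representation.
even_vals = 2*np.arange(len(counts)) + 6
mask = even_vals <= LIMIT
assert np.all(counts[mask] > 0), "Goldbach fails below LIMIT?!"
print("verified Goldbach representations for even numbers up to", LIMIT)
```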

9.80. One can employ convolution ideas to analyze certain higher-order additive problems in rings Z_N, and perhaps in more complicated settings leading into interesting research areas. Note that Exercise 9.41 deals with sums of squares. But when higher powers are involved, the convolution and spectral manipulations are problematic.

To embark on the research path intended herein, start by considering a kth-powers exponential sum (the square and cubic versions appear in Exercise
