Prime Numbers
8.3 Quasi-Monte Carlo (qMC) methods

Algorithm 8.3.6 is usually used in floating-point mode, i.e., with stored floating-point inverse powers q_{i,j} but integer digits n_{i,j}. However, there is nothing wrong in principle with an exact generator in which actual integer powers are kept for the q_{i,j}. In fact, the integer mode can be used for testing the algorithm, in the following interesting way. Take, for example, N = 1000, so vectors x_0, ..., x_999 are allowed, and choose D = 2 dimensions, so that the primes 2, 3 are involved. Then call seed(701), which sets the variable x to be the vector x_701 = (757/1024, 719/729). Now, calling random() exactly 9 times produces x_710 = (397/1024, 674/729), and sure enough, we can test the integrity of the algorithm by going back and calling seed(710) to verify that starting over with seed value 701 + 9 gives precisely the x_710 shown.

It is of interest that Algorithm 8.3.6 really is fast, at least in this sense: in practice, it tends to be faster even than calling a system's built-in random-number function. And this advantage has meaning even outside the numerical-integration paradigm. When one really wants an equidistributed, random number in [0, 1), say, a system's random function should certainly be considered, especially if the natural tendency for random samples to clump and separate is supposed to remain intact. But for many statistical studies one simply wants some kind of irregular "coverage" of [0, 1), one might say a "fair" coverage that does not bias any particular subinterval, in which case such a fast qMC algorithm should be considered.

Now we may get a multidimensional integral by calling, in a very simple way, the procedures of Algorithm 8.3.6:

Algorithm 8.3.7 (qMC multidimensional integration).
Given a dimension D and an integrable function f : R → R, where R = [0, 1]^D, this algorithm estimates the multidimensional integral

    I = ∫_{x ∈ R} f(x) d^D x,

via the generation of N_0 qMC vectors, starting with the n-th of a sequence (x_0, x_1, ..., x_n, ..., x_{n+N_0−1}, ...). It is assumed that Algorithm 8.3.6 is initialized with an index bound N ≥ n + N_0.

1. [Initialize via Algorithm 8.3.6]
    seed(n);    // Start the qMC process, to set a global x = x_n.
    I = 0;
2. [Perform qMC integration]
    // Function random() updates a global qMC vector (Algorithm 8.3.6).
    for(0 ≤ j < N_0) I = I + f(random());
    return I/N_0;
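As a concrete illustration of the integer-mode integrity test described above, here is a minimal Python sketch. For simplicity it computes each component of x_n directly as an exact base-p radical inverse using Fraction, rather than by the incremental digit update that Algorithm 8.3.6 actually performs; the seed()/random() interface is modeled on the text, and the check values for x_701 and x_710 are the exact fractions quoted there.

```python
from fractions import Fraction

PRIMES = (2, 3)  # D = 2 dimensions, so the primes 2 and 3 are involved
_index = 0       # global index n, managed by seed() and random()

def radical_inverse(n, p):
    """Exact base-p radical inverse of n: reflect the base-p digits
    of n about the radix point, returned as a Fraction in [0, 1)."""
    num, den = 0, 1
    while n:
        n, d = divmod(n, p)
        num = num * p + d
        den *= p
    return Fraction(num, den)

def seed(n):
    """Set the global state so that the current vector is x_n."""
    global _index
    _index = n

def current():
    """The current qMC vector x_n, as exact rationals."""
    return tuple(radical_inverse(_index, p) for p in PRIMES)

def random():
    """Advance to the next qMC vector and return it."""
    global _index
    _index += 1
    return current()

# Integrity test from the text: seed(701) gives x_701, and nine calls
# of random() must land on x_710 -- the same vector that seed(710) gives.
seed(701)
assert current() == (Fraction(757, 1024), Fraction(719, 729))
for _ in range(9):
    x = random()
assert x == (Fraction(397, 1024), Fraction(674, 729))
seed(710)
assert current() == x
```

Working with exact fractions makes the test unambiguous; a floating-point version of the same generator agrees to within rounding.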
412 Chapter 8 THE UBIQUITY OF PRIME NUMBERS

Let us give an example of the application of such an algorithm. To assess the volume of the unit D-ball, which is the ball of radius 1, we can take f in terms of the Heaviside function θ (which is 1 for positive arguments, 0 for negative arguments, and 1/2 at 0),

    f(x) = θ(1/4 − (x − y) · (x − y)),

with y = (1/2, 1/2, ..., 1/2), so that f vanishes everywhere outside a ball of radius 1/2. (This is the largest ball that fits inside the cube R.) The estimate of the unit D-ball volume will thus be 2^D I, where I is the output of Algorithm 8.3.7 for the given, sphere-defining function f.

As we have intimated before, it is a wondrous thing to see firsthand how much better a qMC algorithm of this type can do, when compared to a direct Monte Carlo trial. One beautiful aspect of the fundamental qMC concept is that parallelism is easy: in Algorithm 8.3.7, just start each of, say, M machines at a different starting seed, ideally in such a way that some contiguous sequence of NM total vectors is realized. This option is, of course, the point of having a seed function in the first place. Explicitly, to obtain a one-billion-point integration, each of 100 machines would use the above algorithm as is with N_0 = 10^7, except that machine 0 would start with n = 0 (and hence start by calling seed(0)), the second machine would start with n = 10^7, through machine 99, which would start with n = 99 · 10^7. The final integral would be the average of the 100 machine estimates.

Here is a typical numerical comparison: we shall calculate the number π with qMC methods, and compare with direct Monte Carlo. Noting that the exact volume of the unit D-ball is

    V_D = π^(D/2) / Γ(1 + D/2),

let us denote by V_D(N) the calculated volume after N vectors are generated, and denote by π_N the "experimental" value for π obtained by solving the volume formula for π in terms of V_D. We shall do two things at once: display the typical convergence and convey a notion of the inherent parallelism.
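The contiguous-seed parallelism just described can be sketched as follows. This is a hypothetical harness (the names machine_estimate and halton are mine, and a floating-point Halton-style generator stands in for Algorithm 8.3.6); it simulates the M machines sequentially, and checks that averaging their estimates reproduces a single run over the full contiguous block of M·N_0 vectors.

```python
def halton(n, p):
    """Floating-point base-p radical inverse (one coordinate of x_n)."""
    x, f = 0.0, 1.0
    while n:
        f /= p
        n, d = divmod(n, p)
        x += d * f
    return x

def machine_estimate(f, start, N0, primes=(2, 3)):
    """Algorithm 8.3.7 on one 'machine': average f over the N0
    contiguous qMC vectors x_start, ..., x_{start+N0-1}."""
    total = 0.0
    for n in range(start, start + N0):
        total += f(tuple(halton(n, p) for p in primes))
    return total / N0

f = lambda x: x[0] * x[1]   # test integrand; exact integral over [0,1]^2 is 1/4
M, N0 = 10, 1000            # 10 'machines', 1000 vectors each

# Machine m starts at n = m*N0; averaging the M estimates is the same
# computation as one single-machine run over all M*N0 contiguous vectors.
parallel = sum(machine_estimate(f, m * N0, N0) for m in range(M)) / M
single = machine_estimate(f, 0, M * N0)
assert abs(parallel - single) < 1e-9
```

The agreement is exact up to floating-point summation order, which is precisely why a seed function makes the splitting trivial.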
For primes p = 2, 3, 5, so that we are assessing the 3-ball volume, the result of Algorithm 8.3.7 is displayed in Table 8.1. What is displayed in the left-hand column is the total number of points "dropped" into the unit D-cube, while the second column is the associated, cumulative approximation to π. We say cumulative because one may have run each interval of 10^6 counts on a separate machine, yet we display the right-hand column as the answer obtained by combining the machines up to that N value inclusive. For example, the result π_5 can be thought of either as the result after 5 · 10^6 points are generated, or equivalently, after 5 separate machines each do 10^6 points. In the latter instance, one would have called the seed(n) procedure with 5 different initial seeds to start each respective machine's interval. How do these data compare with direct Monte Carlo? The rough answer is that one can expect the error in the last (N = 10^7) row of
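The D = 3 experiment behind Table 8.1 can be sketched in a few lines. Assumptions here: a floating-point Halton-style generator stands in for Algorithm 8.3.6, and N is kept far smaller than the table's 10^7 points. The integrand is the Heaviside ball indicator f above, the volume estimate is 2^3 · I, and π is recovered by solving V_3 = π^(3/2)/Γ(5/2) = 4π/3 for π.

```python
import math
import random as rnd   # renamed so it cannot clash with the text's random()

def halton(n, p):
    """Floating-point base-p radical inverse (one coordinate of x_n)."""
    x, f = 0.0, 1.0
    while n:
        f /= p
        n, d = divmod(n, p)
        x += d * f
    return x

PRIMES = (2, 3, 5)   # D = 3, so the primes 2, 3, 5 are involved
N = 100_000

def f(x):
    """theta(1/4 - (x-y).(x-y)): indicator of the ball of radius 1/2
    about y = (1/2, 1/2, 1/2)."""
    r2 = sum((c - 0.5) ** 2 for c in x)
    return 1.0 if r2 < 0.25 else 0.0

# Algorithm 8.3.7 over the qMC vectors x_0, ..., x_{N-1}:
I_qmc = sum(f(tuple(halton(n, p) for p in PRIMES)) for n in range(N)) / N
V_qmc = 2 ** 3 * I_qmc       # estimate of V_3 = 4*pi/3
pi_qmc = 3 * V_qmc / 4

# Direct Monte Carlo with the same point count, for comparison:
rnd.seed(1)
I_mc = sum(f((rnd.random(), rnd.random(), rnd.random())) for _ in range(N)) / N
pi_mc = 3 * (2 ** 3 * I_mc) / 4

print(pi_qmc, pi_mc)   # the qMC value is typically markedly closer to pi
```

Even at this modest N, the qMC estimate of π is usually good to a few parts in a thousand, already hinting at the gap over direct Monte Carlo that the table quantifies.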