1.1 Problems and progress

some authors denote bit and operation complexity by notations such as O_b and O_op, respectively. So when an algorithm’s complexity is cast in “O” form, we shall endeavor to specify in every case whether we mean bit or operation complexity. One should take care that these are not necessarily proportional, for it matters whether the “operations” are in a field, are adds or multiplies, or are comparisons (as occur within “if” statements). For example, we shall
see in Chapter 9 that whereas a basic FFT multiplication method requires
O(D ln D) floating-point operations when the operands possess D digits each (in some appropriate base), there exist methods having bit complexity O(n ln n ln ln n), where now n is the total number of operand bits. So in such a case there is no clear proportionality at work, the relationships between digit size, base, and bit size n are nontrivial (especially when floating-point errors figure into the computation), and so on. Another kind of nontrivial comparison might involve the Riemann zeta function, which for certain arguments can be evaluated to D good digits in O(D) operations, but we mean full-precision, i.e., D-digit operations. In contrast, the bit complexity to obtain D good digits (or a proportional number of bits) grows faster than this. And of course, we have a trivial comparison of the two complexities: The product of two large integers takes one (high-precision) operation, while a flurry of bit manipulations is generally required to effect this multiply! On the face of it, we are saying that there is no obvious relation between these two complexity bounds. One might ask, “If these two types of bounds (bit- and operation-based bounds) are so different, isn’t one superior, maybe more profound than the other?” The answer is that one is not necessarily better than the other. It might happen that the available machinery—hardware and software—is best suited for all operations to be full-precision; that is, every add and multiply is of the D-digit variety, in which case you are interested in the operation-complexity bound. If, on the other hand, you want to start from scratch and create special, optimal bit-complexity operations whose precision varies dynamically during the whole project, then you would be more interested in the bit-complexity bound. In general, the safe assumption to remember is that bit- versus operation-complexity comparisons can often be of the “apples and oranges” variety.
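
As a concrete illustration of the operation count, here is a minimal Python sketch of such a basic floating-point FFT multiplication (numpy is assumed available; the function name fft_multiply and the choice of base 10 are illustrative choices, not from the text):

    import numpy as np

    def fft_multiply(a: int, b: int) -> int:
        # Split each operand into D base-10 digits, least significant first.
        da = [int(c) for c in str(a)[::-1]]
        db = [int(c) for c in str(b)[::-1]]
        # Pad to a power of two at least the length of the acyclic convolution.
        n = 1
        while n < len(da) + len(db):
            n *= 2
        # O(D ln D) floating-point operations: two forward FFTs, a pointwise
        # product, and one inverse FFT give the digit-array convolution.
        conv = np.fft.ifft(np.fft.fft(da, n) * np.fft.fft(db, n)).real
        # Round each convolution entry to the nearest integer (this is where
        # floating-point error can intrude for very large D), then propagate
        # carries to restore base-10 digits.
        digits, carry = [], 0
        for c in conv:
            carry += int(round(c))
            digits.append(carry % 10)
            carry //= 10
        while carry:
            digits.append(carry % 10)
            carry //= 10
        # Reassemble the base-10 digits into an integer.
        result = 0
        for d in reversed(digits):
            result = result * 10 + d
        return result

    # Example: agrees with exact integer multiplication.
    assert fft_multiply(123456789, 987654321) == 123456789 * 987654321

Note that each convolution entry must be computed to within 0.5 in absolute value for the rounding to succeed, which is one way the floating-point errors mentioned above tie the usable digit size D to the working precision.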

Because the phrase “running time” has achieved a certain vogue, we shall sometimes use this term as interchangeable with “bit complexity.” This equivalence depends, of course, on the notion that the real, physical time a machine requires is proportional to the total number of relevant bit operations. Though this equivalence may well decay in the future—what with quantum computing, massive parallelism, advances in word-oriented arithmetic architecture, and so on—we shall throughout this book just assume that running time and bit complexity are the same. Along the same lines, by “polynomial-time” complexity we mean that bit operations are bounded above by a fixed power of the number of bits in the input operands. So, for example, none of the dominant factoring algorithms of today (ECM, QS, NFS) is polynomial-time, but simple addition, multiplication, powering, and so on are
polynomial-time. For example, powering, that is, computing x^y mod z, is polynomial-time when done by repeated squaring, which uses only O(ln y) multiplications.
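
As an illustrative sketch (again in Python; the name power_mod is an assumption, not the book’s notation), the classical binary ladder realizes this: it scans the bits of y, squaring at each step and reducing mod z throughout, so every intermediate stays below z^2:

    def power_mod(x: int, y: int, z: int) -> int:
        # Square-and-multiply: one squaring per bit of y, plus a multiply
        # into the accumulator whenever the current bit of y is set.
        result = 1
        x %= z
        while y > 0:
            if y & 1:
                result = (result * x) % z
            x = (x * x) % z
            y >>= 1
        return result

    # Example: matches Python's built-in three-argument pow.
    assert power_mod(3, 200, 1000003) == pow(3, 200, 1000003)

Since the loop runs once per bit of y and each modular multiplication is itself polynomial-time in the operand bit lengths, the whole powering is polynomial-time.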
