COMPUTATIONAL PROBLEMS IN ABSTRACT ALGEBRA.
A. L. Tritter

5. The S8-module. Let the eight ring elements so far constructed be called P0, P1, P2, ..., P7, in the order in which they were obtained. Then P0 is the generator originally given for the S8-module and, if we regard the Pi for the moment as elements of the group ring of S8 over GF(2), we have Pi = Pi-1 Rp (0 < i < 8); the 8! rows of our data matrix are seen to be the elements Pi g (0 ...
Such a question is most straightforwardly answered by developing a triangular basis for the space spanned by the row vectors, and "pivoting" each basis vector in turn "out" of the target vector; if no basis vector at all appears to correspond to some non-zero coordinate of the target vector, then the answer is found to be "no"; if the target vector suddenly disappears, then we know the answer is "yes".

Triangularizing a matrix is neither entertaining nor interesting. When interest is aroused, it is by one or other of the effects that scale can have on this problem. Most usually the interest lies in problems of loss of significance caused by limitations on the accuracy with which matrix elements can be retained. But our problem is not of this kind; as our matrix is over a finite field, we necessarily retain absolute accuracy. Indeed, as the field is GF(2), each exact matrix element occupies only one bit in memory, vector addition is represented by the "exclusive-or" operation, and multiplication does not arise.

In our problem, the sheer immensity of the data leads to a class of questions concerning the distribution in time of machine errors. Once we have said this, we have opened Pandora's box. When a mathematical "proof" is based in part upon error-preventing, error-detecting, and error-correcting techniques whose reliability is statistical in character, the nature of proof is utterly unlike anything David Hilbert might have recognized as such. But this problem goes far beyond the scope of the present paper; we shall attempt to discuss it elsewhere. In this paper, we shall do no more than look briefly at some of the methods actually employed in the computer program which makes the calculation.
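The triangularization just described, with one bit per matrix element and exclusive-or as vector addition, can be sketched in a few lines. In this sketch (which is illustrative only; the function names are not from the paper, and arbitrary-precision Python integers stand in for the packed bit rows of the original program) each row vector is an integer used as a bit vector, and each basis vector is filed under the position of its leading bit:

```python
def reduce_to_basis(rows):
    """Develop a triangular basis over GF(2); basis[i] has leading bit i."""
    basis = {}
    for v in rows:
        while v:
            lead = v.bit_length() - 1      # position of the leading 1-bit
            if lead not in basis:
                basis[lead] = v            # new pivot: file the vector here
                break
            v ^= basis[lead]               # XOR = vector addition over GF(2)
        # if v reduced to zero, the row was dependent on earlier rows
    return basis

def in_span(target, basis):
    """Pivot each basis vector in turn 'out' of the target vector."""
    v = target
    while v:
        lead = v.bit_length() - 1
        if lead not in basis:
            return False                   # no basis vector for this coordinate: "no"
        v ^= basis[lead]
    return True                            # target vector disappeared: "yes"
```

Because each element is one bit, a whole machine word of coordinates is added per XOR; this is what makes the operation cheap enough to contemplate at the scale of an 8!-row matrix.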
If the answer to our question is "yes", if the target vector is a linear combination of the data vectors, then it is found to be so in an explicit way, and detailed record-keeping along the way should make it possible to say something like: the target vector is the sum of such and such basis vectors and, for each of them, this basis vector is the sum of that data vector and those earlier basis vectors. In this case we should have available an effective, and cheap, way of confirming that the answer is indeed "yes".

But suppose the answer to be "no". Then we are asserting that there are 2^{8!} linear combinations of the data vectors, and that the target vector is none of them. How can such an assertion be verified, except exhaustively (i.e. by doing it again)? Well, there is one thing available for us to try: if the matrix actually yielded 8! basis vectors, there has surely been an error. But if only a smaller number have appeared, we are on no surer ground than before.

Let us recapitulate. An affirmative answer would be verified if we were to maintain records from which the "pedigree" of the target vector and every basis vector could be determined. A negative answer could be made marginally more convincing if we knew the rank of the data matrix. But, in general, to be justified in accepting a negative answer, we must repeat the entire calculation at least once. Suppose we do so. And suppose the answer is now "yes". What do we do? Or suppose the answer is again "no". How reliable is a single bit? Should we try again? Or what?

What, in fact, we do is this. We do, of course, keep the pedigree records, and we always know how many basis vectors have been found. But, in addition, every operation (including the checking operations) is performed twice, and the results are compared, bit by bit.
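The pedigree records can be kept almost for free during the triangularization, since over GF(2) a pedigree is just the set of data-row indices appearing an odd number of times in a vector's history. The following sketch (again with hypothetical names, not the paper's own routines) carries such a set alongside each basis vector, and uses it to confirm a "yes" answer cheaply by re-adding the named data vectors:

```python
def reduce_with_pedigree(rows):
    """basis[lead] = (vector, frozenset of data-row indices composing it)."""
    basis = {}
    for idx, v in enumerate(rows):
        ped = {idx}                        # this row's pedigree starts as itself
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = (v, frozenset(ped))
                break
            bv, bped = basis[lead]
            v ^= bv
            ped ^= bped                    # symmetric difference: mod-2 bookkeeping
    return basis

def express(target, rows, basis):
    """Return the set of data-row indices whose sum is the target, or None."""
    v, ped = target, set()
    while v:
        lead = v.bit_length() - 1
        if lead not in basis:
            return None                    # target is not in the span
        bv, bped = basis[lead]
        v ^= bv
        ped ^= bped
    return ped

def verify(target, rows, ped):
    """The effective, cheap confirmation of a 'yes' answer."""
    acc = 0
    for i in ped:
        acc ^= rows[i]
    return acc == target
```

The confirmation in `verify` touches only the data vectors named in the pedigree, so it is far cheaper than repeating the triangularization.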
In a single magnetic tape pass, the data matrix is read in from two separate files assumed to hold identical data and, for so long as that is true, the common value of the input data is accepted; as wide a section of the data matrix as possible is processed en passant, and the result is written out as a data file. Then it is done again: the same two data files are read in and compared a second time, their common value processed a second time, and the result written out as a second file which should be identical to the first. These two data files are the input to the next tape pass, when they will be compared bit by bit (and each file consists of more than 200 million bits), twice, and so on. The basis vectors developed on the first run through a pass go onto one output tape, those from the second onto another, at the same time; these are compared explicitly, twice (of course), after the second time through.

If at any stage a discrepancy is discovered between tapes whose contents should be identical, elaborate signalling and recovery procedures are automatically put into operation; we shall say no more about these except that we have available, on a typical tape pass, three sets of tapes, called A, B, and C, such that if we are now reading data from B and writing it to C, then B was written on the previous pass and A was being read on that pass. That way, if we discover a discrepancy in the B tapes after having written upon the C tapes, the A tapes (from which B were made, and which have already satisfied two bit-by-bit comparisons) are still intact. When I say that the data matrix occupies a file taking up two reels of tape, you will see that this program keeps 14 tape drives busy: A, B, and C, each two reels long, plus one reel of output for basis vectors, and two copies of everything.
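The discipline of the tape passes, duplicate inputs compared before being trusted, every result computed twice, and the previous generation of tapes left intact for recovery, can be caricatured in a few lines. This is a toy sketch under obvious assumptions: in-memory pairs of strings stand in for pairs of duplicate tape files, and the names are hypothetical:

```python
def run_pass(tape_pair, process):
    """One pass: compare the duplicate inputs, then compute the result twice."""
    d1, d2 = tape_pair
    if d1 != d2:
        raise IOError("discrepancy between duplicate input tapes")
    out1 = process(d1)
    out2 = process(d2)                     # every operation is performed twice
    if out1 != out2:
        raise IOError("discrepancy between duplicate results")
    return (out1, out2)                    # written out as two files

def run(initial_pair, process, passes):
    """Rotate through tape sets A, B, C; the source of each pass stays intact."""
    tapes = {"A": initial_pair, "B": None, "C": None}
    order = ["A", "B", "C"]
    src = 0
    for _ in range(passes):
        dst = (src + 1) % 3                # write to the next set in rotation
        tapes[order[dst]] = run_pass(tapes[order[src]], process)
        src = dst                          # the old source survives for recovery
    return tapes[order[src]]
```

The point of the three-way rotation is visible in `run`: when a discrepancy surfaces while writing set C from set B, set A, which produced B and has already passed two bit-by-bit comparisons, has not yet been overwritten.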
A magnetic tape pass takes about 18 minutes from beginning to end, of which the second half is a repetition of the first; the program can be interrupted under switch control, with effectively no wastage of machine time, at any such breakpoint. Another switch will cause the program to pause at the next breakpoint and await the instruction to interrupt or to proceed; yet another, inspected by the program three times per second of elapsed time, says to interrupt at once, discarding anything done since the last previous breakpoint. Other switches instruct the program whether to stop or to go on to get the rank of the data matrix in case the answer