
to ensure that the sizes and locations of the sequences and sections be chosen to correspond closely with the access characteristics of the storage medium.

10.1.4. Locally Dense Representation

A special case of a sparse array encountered in numerical computer applications is the sparse matrix. Quite frequently a sparse matrix can be split into submatrices, only a few of which contain significant non-zero entries. In this case, the matrix may be said to be locally dense, and should be represented and processed in a manner which takes advantage of this fact. One method of achieving this is to store with each significant submatrix its position and size, and to represent the whole matrix as a table or sequence of such submatrices, where each submatrix is stored contiguously in the usual way, using multiplicative address calculation. However, the submatrices will in general be of different sizes, and if the size varies during the processing of the matrix, the problems will be quite severe. A possible way of dealing with sparse matrices is to split them into submatrices of standard size, say sixteen by sixteen, and set up a table of pointers to each of these submatrices. A submatrix that is wholly zero is represented by a null pointer and occupies no additional storage; otherwise, the submatrix is stored in the usual way, using the following method of address calculation.

Each access to the array involves first "interleaving" the bit values of the two subscripts, so that the least significant part of the result contains the least significant part of both subscripts. The more significant part of the result is then used to consult the table of addresses, to locate the desired submatrix, and the less significant part to find the position of the required element within the submatrix. This technique of interleaving subscripts may on some machines be more efficient than general multiplication. If some of the submatrices have to be held on backing store, this method of address calculation is particularly recommended, since it is equally efficient at processing the matrix by rows as by columns; and the method can then be recommended for all large arrays, whether sparse or not, particularly on a paged computer. The inventor of this method is Professor E. W. Dijkstra.
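As a concrete illustration, the following C sketch (not from the notes; the names SparseMatrix, interleave, sm_get and sm_set are invented for the example, and the matrix side is assumed to be a power of two no smaller than sixteen) stores the matrix as a table of pointers to sixteen-by-sixteen blocks, with the table itself kept in interleaved order and a null pointer standing for a wholly zero block:

#include <stdio.h>
#include <stdlib.h>

#define BLOCK 16                       /* standard submatrix size          */

typedef struct {
    unsigned  side;                    /* matrix is side x side; assumed a
                                          power of two, at least 16        */
    double  **blocks;                  /* (side/16)^2 pointers, held in
                                          interleaved (Morton) order;
                                          NULL means an all-zero block     */
} SparseMatrix;

/* Interleave the bits of i and j so that the least significant part of
 * the result contains the least significant bits of both subscripts.      */
static unsigned interleave(unsigned i, unsigned j)
{
    unsigned r = 0;
    for (unsigned b = 0; b < 16; b++)
        r |= (((i >> b) & 1u) << (2 * b + 1)) | (((j >> b) & 1u) << (2 * b));
    return r;
}

/* Fetch element (i, j): the more significant part of the interleaved
 * subscripts consults the table of block pointers; the low 8 bits
 * (4 from each subscript) locate the element within its 16 x 16 block.    */
static double sm_get(const SparseMatrix *m, unsigned i, unsigned j)
{
    unsigned a   = interleave(i, j);
    double  *blk = m->blocks[a >> 8];
    return blk ? blk[a & 0xFF] : 0.0;  /* null pointer: block is all zero  */
}

/* Store element (i, j), allocating its block on first use so that
 * wholly zero blocks occupy no additional storage.                        */
static void sm_set(SparseMatrix *m, unsigned i, unsigned j, double v)
{
    unsigned a = interleave(i, j);
    if (m->blocks[a >> 8] == NULL)
        m->blocks[a >> 8] = calloc(BLOCK * BLOCK, sizeof(double));
    m->blocks[a >> 8][a & 0xFF] = v;
}

int main(void)
{
    unsigned side = 64;                /* a 4 x 4 table of 16 x 16 blocks  */
    SparseMatrix m = { side,
        calloc((side / BLOCK) * (side / BLOCK), sizeof(double *)) };
    sm_set(&m, 3, 40, 2.5);
    printf("%g %g\n", sm_get(&m, 3, 40), sm_get(&m, 60, 60));  /* 2.5 0    */
    return 0;
}

The symmetry of the interleaving is what makes row-wise and column-wise processing equally costly: neither direction favours the layout, unlike the usual row-major multiplicative scheme.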

10.1.5. Grid Representation

The phenomenon of cross-classification of files causes as many problems in a computer as it does in real life. It is usually solved by standardising on one of the classifications which is most convenient, and accepting the extra cost of processing in accordance with the other classification, even if this involves resorting the file. Thus the sparse mapping

sparse array (i:D1; j:D2) of R

is represented as:<br />

sparse array D1 of (sparse array D2 of R)
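A minimal C sketch of this grid representation (again with invented names: Row, Cell, grid_lookup, and double standing in for the range type R) holds the outer sparse array as a list of entries indexed by i, each of which is itself a sparse array of (j, value) pairs; access by the chosen classification is direct, while processing by the other classification means scanning every row, which is the extra cost mentioned above:

#include <stddef.h>

typedef struct Cell {                 /* one (j, value) entry of a row    */
    int          j;
    double       value;               /* double stands in for R           */
    struct Cell *next;
} Cell;

typedef struct Row {                  /* one (i, row) entry: the row is
                                         itself a sparse array D2 of R    */
    int         i;
    Cell       *cells;
    struct Row *next;
} Row;

/* Look up element (i, j): follow the D1 classification first, then the
 * nested sparse array for D2; absent entries are taken as zero.           */
double grid_lookup(const Row *grid, int i, int j)
{
    for (const Row *r = grid; r != NULL; r = r->next)
        if (r->i == i)
            for (const Cell *c = r->cells; c != NULL; c = c->next)
                if (c->j == j)
                    return c->value;
    return 0.0;
}

Processing by the second classification would require visiting every Row in turn, or else building, by resorting, a second structure indexed by j first.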
