automatically exploiting cross-invocation parallelism using runtime ...


(a) DOALL:
  1  for (i = 0; i < M; i++) {
  2      node = Nodes[i];
  3      update(node);
     }

(b) DOANY:
  1  for (i = 0; i < M; i++) {
  2      Nodes[i] = malloc();
  3      update(Nodes[i]);
     }

(c) LOCALWRITE:
  1  for (i = 0; i < M; i++) {
  2      node = Nodes[index[i]];
  3      update(Nodes[i]);
     }

Figure 2.3: Intra-invocation parallelization techniques which rely on static analysis (X.Y refers to the Yth statement in the Xth iteration of the loop): (a) DOALL concurrently executes iterations among threads and no inter-thread synchronization is necessary; (b) DOANY applies locks to guarantee atomic execution of the function malloc; (c) LOCALWRITE goes through each node, and each worker thread only updates the nodes belonging to itself.

Parallelization techniques such as DOALL [1], DOANY [55, 75], LOCALWRITE [26], DOACROSS [15] and DSWP [51] belong to the first criterion. The applicability and scalability of these techniques depend entirely on the quality of static analysis.

DOALL parallelization can be applied to a loop in which each iteration is independent of all other loop iterations. Figure 2.3(a) illustrates such a DOALL loop. In the figure, X.Y refers to the Yth statement in the Xth iteration of the loop. DOALL loops are parallelized by allocating sets of loop iterations to different threads. Although DOALL parallelization often yields quite scalable performance improvement, its applicability is limited: in most loops, dependences exist across loop iterations. Figure 2.3(b) shows a slightly different loop. The cross-iteration dependence from malloc() to itself prohibits DOALL parallelization. DOANY, instead, synchronizes these malloc() calls using locks. Locks guarantee that only one thread can execute malloc() at a time (as shown in Figure 2.3(b)). Locks enforce atomic execution but do not guarantee a specific execution order of the malloc() calls. As a result, DOANY requires the protected operations to be commutative. Figure 2.3(c) demonstrates another loop, whose cross-iteration dependences are caused by irregular accesses to array elements (through an index array). DOANY fails to parallelize this loop since the execution order of update() matters. In this case, the LOCALWRITE parallelization technique works if it can find a partition of the shared memory space which guarantees that each thread only accesses and updates the memory partition it owns. For this loop example, LOCALWRITE partitions the Nodes array into two sections and assigns each section to one of the worker threads. Each worker thread executes all of the iterations, but before it executes statement 3, it checks whether the node falls within its own memory partition. If it does not, the worker thread simply skips statement 3 and starts executing the next iteration. LOCALWRITE's performance gain is often limited by the redundant computation among threads (statements 1 and 2 in this example). Moreover, a partition of the memory space is not always available at compile time, since the dependence patterns may be determined at runtime by specific inputs. Overall, these three parallelization techniques can only handle limited types of cross-iteration dependences, and they require static analysis to prove that no other parallelization-prohibiting dependences exist.

In contrast to these three techniques, DOACROSS and DSWP parallelization can handle any type of dependence. For example, consider the loop shown in Figure 2.4(a).
Figure 2.4(b) shows the program dependence graph (PDG) corresponding to the code. In the PDG, edges that participate in dependence cycles are shown as dashed lines. Since the statements on lines 3 and 6 and the statement on line 5 each form a dependence cycle, each iteration is dependent on the previous one. The limitations of DOALL, DOANY

