automatically exploiting cross-invocation parallelism using runtime ...
[Figure 2.3 shows three loops and their parallel schedules:
(a) DOALL: for (i = 0; i < M; i++) { node = Nodes[i]; update(node); }
(b) DOANY: for (i = 0; i < M; i++) { Nodes[i] = malloc(); update(Nodes[i]); }
(c) LOCALWRITE: for (i = 0; i < M; i++) { node = Nodes[index[i]]; update(Nodes[i]); }]
Figure 2.3: Intra-invocation parallelization techniques which rely on static analysis (X.Y refers to the Yth statement in the Xth iteration of the loop): (a) DOALL concurrently executes iterations among threads, and no inter-thread synchronization is necessary; (b) DOANY applies locks to guarantee atomic execution of the function malloc; (c) LOCALWRITE goes through each node, and each worker thread only updates the nodes belonging to itself.

Parallelization techniques such as DOALL [1], DOANY [55, 75], LOCALWRITE [26], DOACROSS [15], and DSWP [51] belong to the first criterion. The applicability and scalability of these techniques depend entirely on the quality of static analysis.

DOALL parallelization can be applied to a loop in which each iteration is independent of all other loop iterations. Figure 2.3(a) illustrates such a DOALL loop. In the figure, X.Y refers to the Yth statement in the Xth iteration of the loop. DOALL loops are parallelized by allocating sets of loop iterations to different threads. Although DOALL parallelization often yields quite scalable performance improvements, its applicability is limited: in most loops, dependences exist across loop iterations. Figure 2.3(b) shows a slightly different loop. The cross-iteration dependence from malloc() to itself prohibits DOALL parallelization. DOANY, instead, synchronizes these malloc() function calls using locks. The locks guarantee that only one thread can execute malloc() at a time (as shown in Figure 2.3(b)). Locks enforce atomic execution but do not guarantee a specific execution order of the malloc() calls; as a result, DOANY requires the protected operations to be commutative. Figure 2.3(c) demonstrates another loop whose cross-iteration dependences are caused by irregular accesses to array elements (through an index array). DOANY fails to parallelize this loop because the execution order of update() matters. In this case, the LOCALWRITE parallelization technique works if it can find a partition of the shared memory space which guarantees that each thread accesses and updates only the memory partition it owns. For this loop example, LOCALWRITE partitions the Nodes array into two sections and assigns each section to one of the worker threads. Each worker thread executes all of the iterations, but before it executes statement 3, it checks whether the node falls within its own memory partition. If it does not, the worker thread simply skips statement 3 and starts executing the next iteration. LOCALWRITE's performance gain is often limited by the redundant computation among threads (statements 1 and 2 in this example). Moreover, a partition of the memory space is not always available at compile time, since the dependence patterns may be determined at runtime by specific inputs. Overall, these three parallelization techniques can handle only limited types of cross-iteration dependences, and they require static analysis to prove that no other parallelization-prohibiting dependences exist.

In contrast to these three techniques, DOACROSS and DSWP parallelization can handle any type of dependence. For example, consider the loop shown in Figure 2.4(a).
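Figure 2.4 is not reproduced in this excerpt. Judging from the dependence cycles described below (the statements on lines 3 and 6, and the statement on line 5), the loop is presumably a linked-list traversal that accumulates a cost; the node type, calc(), and all identifiers in the sketch below are illustrative stand-ins, not the dissertation's exact code:

```c
#include <stddef.h>

/* Illustrative node type and calc(); assumptions, not from the source. */
typedef struct node {
    int value;
    struct node *next;
} node_t;

static int calc(const node_t *n) { return n->value * n->value; }

/* A loop of the shape described for Figure 2.4(a).  The line comments
   mirror the statement numbering used in the discussion: the while
   test and the pointer advance form one dependence cycle, and the
   cost accumulation forms another, so every iteration depends on the
   previous one. */
static int list_cost(const node_t *head) {
    int cost = 0;                   /* line 1 */
    const node_t *node = head;      /* line 2 */
    while (node != NULL) {          /* line 3: cycle with line 6 */
        int ncost = calc(node);     /* line 4 */
        cost += ncost;              /* line 5: self-cycle */
        node = node->next;          /* line 6: cycle with line 3 */
    }
    return cost;                    /* line 7 */
}
```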
Figure 2.4(b) shows the program dependence graph (PDG) corresponding to the code. In the PDG, edges that participate in dependence cycles are shown as dashed lines. Since the statements on lines 3 and 6 and the statement on line 5 each form a dependence cycle, each iteration is dependent on the previous one. The limitations of DOALL, DOANY
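Rather than eliminating such cycles, DSWP partitions the PDG so that each dependence cycle stays within one pipeline stage and a queue forwards values between stages. The following is only a sequential sketch of that decomposition under assumed names; a real DSWP implementation runs the stages on separate threads with a concurrent queue so they overlap in time:

```c
#include <stddef.h>

/* Illustrative node type; an assumption, not from the source. */
typedef struct lnode {
    int value;
    struct lnode *next;
} lnode_t;

/* Stand-in for the inter-stage queue (a real runtime would use a
   concurrent, bounded queue between two threads). */
#define QCAP 64
static const lnode_t *queue_buf[QCAP];
static int qlen = 0;

/* Stage 1 keeps the traversal dependence cycle (the while test and
   node = node->next) and produces each node to the queue. */
static void stage1_traverse(const lnode_t *head) {
    for (const lnode_t *n = head; n != NULL && qlen < QCAP; n = n->next)
        queue_buf[qlen++] = n;
}

/* Stage 2 keeps the accumulation cycle (cost += ncost) and consumes
   nodes from the queue; on a separate thread it runs concurrently
   with stage 1, one node behind. */
static int stage2_accumulate(void) {
    int cost = 0;
    for (int i = 0; i < qlen; i++)
        cost += queue_buf[i]->value * queue_buf[i]->value;  /* calc() inlined */
    return cost;
}
```

Because each stage retains exactly one cycle, neither thread ever waits for a value produced later in the same iteration, which is what lets DSWP pipeline loops that defeat DOALL, DOANY, and LOCALWRITE.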