automatically exploiting cross-invocation parallelism using runtime ...
1 cost = 0;
2 node = list->head;
3 while (node) {
4   ncost = doit(node);
5   cost += ncost;
6   node = node->next;
7 }

[Figure 2.4: Sequential Loop Example for DOACROSS and DSWP. (a) Loop with cross-iteration dependences; (b) PDG.]

[Figure 2.5: Parallelization Execution Plan for DOACROSS and DSWP. (a) DOACROSS; (b) DSWP.]
1 cost = 0;
2 node = list->head;
3 while (cost < T && node) {
4   ncost = doit(node);
5   cost += ncost;
6   node = node->next;
7 }

[Figure 2.6: Example loop which cannot be parallelized by DOACROSS or DSWP. (a) Loop cannot benefit from DOACROSS or DSWP; (b) PDG.]

and LOCALWRITE prevent any of them from parallelizing this loop. Similar to DOALL, DOACROSS parallelizes this loop by assigning sets of loop iterations to different threads. However, since each iteration depends on the previous one, later iterations synchronize with earlier ones, waiting for the cross-iteration dependences to be satisfied. The code between synchronizations can run in parallel with code executing in other threads (Figure 2.5(a)). Using the same loop, we demonstrate the DSWP technique in Figure 2.5(b). Unlike DOALL and DOACROSS parallelization, DSWP does not allocate entire loop iterations to threads. Instead, each thread executes a portion of all loop iterations. The pieces are selected such that the threads form a pipeline. In Figure 2.5(b), thread 1 is responsible for executing statements 3 and 6 for all iterations of the loop, and thread 2 is responsible for executing statements 4 and 5 for all iterations of the loop. Since statements 4 and 5 do not feed statements 3 and 6, all cross-thread dependences flow from thread 1 to thread 2, forming a thread pipeline. Although neither technique is limited by the type of dependences, the performance of the resulting parallel programs relies heavily on the quality of the static analysis. For DOACROSS, excessive dependences lead to frequent synchronization across threads, limiting actual parallelism or even resulting in sequential execution of the program. For DSWP, conservative analysis results lead to unbalanced stage partitions or a small parallel partition, which in turn limits the performance gain. Figure 2.6 shows a loop