automatically exploiting cross-invocation parallelism using runtime ...
1.3 Dissertation Organization

Chapter 2 examines existing intra- and inter-invocation parallelization techniques, characterizing their applicability and scalability. This discussion motivates DOMORE and SPECCROSS. Chapter 3 describes the design and implementation details of DOMORE, the first non-speculative runtime technique to exploit cross-invocation parallelism. DOMORE was published and presented at the 2013 International Symposium on Code Generation and Optimization [28]. Chapter 4 introduces its speculative counterpart technique, SPECCROSS. A quantitative evaluation of DOMORE and SPECCROSS is given in Chapter 5. Finally, Chapter 6 summarizes the conclusions of this dissertation and describes future avenues of research.
Chapter 2

Background

Imprecise and fragile static analyses limit the effectiveness of existing cross-invocation parallelization techniques. Addressing this vulnerability is the focus of this thesis. In this chapter we explain the state of the art in automatic parallelization techniques. In Section 2.1 we first identify the limitations of conventional analysis-based approaches to automatic parallelization. In Section 2.2 we provide a detailed discussion of current intra-invocation parallelization techniques, explaining how some of them compensate for the conservative nature of their analyses. Finally, in Section 2.3 we present existing cross-invocation parallelization techniques; in particular, how all of them rely on static analysis, which motivates the work of DOMORE and SPECCROSS.

2.1 Limitations of Analysis-based Approaches in Automatic Parallelization

Automatic parallelization is an ideal solution which frees programmers from the difficulties of parallel programming and platform-specific performance tuning. Parallelizing compilers can automatically parallelize affine loops [2, 7]. Loop A in Figure 2.1 shows an example loop. If a compiler proves that all memory variables in the body of the function foo do not
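The affine pattern described above can be sketched as follows. This is a minimal illustration, not the loop from Figure 2.1: the array name and the body of foo are hypothetical, chosen only to show a loop whose bounds and subscripts are affine in the induction variable and whose iterations touch disjoint memory, making it a candidate for DOALL parallelization once the compiler proves the array is not aliased inside foo.

```c
#include <stddef.h>

/* Hypothetical loop body: each call writes only a[i], so no two
   iterations touch the same memory location. */
static void foo(double *a, size_t i) {
    a[i] = a[i] * 2.0 + 1.0;   /* subscript is affine: exactly index i */
}

/* A loop in the style of Loop A: affine bounds (0 <= i < n) and affine
   subscripts. If static analysis proves a[] has no aliases reachable
   from foo, the compiler may legally run the iterations in parallel,
   e.g. by assigning each worker thread a contiguous chunk of [0, n). */
void loop_a(double *a, size_t n) {
    for (size_t i = 0; i < n; i++)
        foo(a, i);
}
```

The key property is that parallelization is only legal after the alias proof succeeds; as the surrounding text argues, conservative static analysis often fails to produce that proof for realistic code.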