3.3 Compiler Implementation

The DOMORE compiler generates scalable parallel programs by exploiting both intra-invocation and cross-invocation parallelism. DOMORE first detects a candidate code region which contains a large number of loop invocations. DOMORE currently targets loop nests whose outer loop cannot be efficiently parallelized because of frequent runtime dependences, and whose inner loop is invoked many times and can be parallelized easily. For each candidate loop nest, DOMORE generates parallel code for the scheduler and worker threads. This section uses the example loop from CG (Figure 3.1) to demonstrate each step of the code transformation. Figure 3.6(a) gives the pseudo IR code of the CG example.

3.3.1 Partitioning Scheduler and Worker

DOMORE allows threads to execute iterations from consecutive parallel invocations. However, two parallel invocations do not necessarily execute consecutively; typically a sequential region exists between them. In CG's loop, statements A, B, and C belong to the sequential region. After removing the barriers, threads must execute these sequential regions before starting the iterations from the next parallel invocation.

DOMORE executes the sequential code in the scheduler thread. This provides a general solution for handling the sequential code enclosed by the outer loop. After partitioning, only the scheduler thread executes that code, so there is no redundant computation and no need for special handling of side-effecting operations. If a data flow dependence exists between the scheduler and worker threads, the value can be forwarded to the worker threads through the same queues used to communicate synchronization conditions.

The rule for partitioning code into worker and scheduler threads is straightforward. The inner loop is partitioned into two sections: the loop-traversal instructions belong to the scheduler thread, and the inner loop body belongs to the worker threads. Instructions outside the inner loop but enclosed by the outer loop are treated as sequential code and thus belong to the scheduler.

To decouple the execution of the scheduler and worker threads for better latency tolerance, they should communicate in a pipelined manner: values are forwarded in one direction, from the scheduler thread to the worker threads.

The initial partition may not satisfy this pipeline requirement. To address this problem, DOMORE first builds a program dependence graph (PDG) for the target loop nest (Figure 3.6(b)), including both cross-iteration and cross-invocation dependences for the inner loop. DOMORE then groups the PDG nodes into strongly connected components (SCCs) and creates a DAG_SCC (Figure 3.6(c)), a directed acyclic graph over those SCCs. DOMORE visits each SCC in the DAG_SCC: (1) if an SCC contains any instruction that has been assigned to the scheduler, all instructions in that SCC are assigned to the scheduler partition; otherwise, all instructions in that SCC are assigned to the worker partition; (2) if an SCC belonging to the worker partition causes a backedge towards any SCC belonging to the scheduler partition, that SCC is re-assigned to the scheduler. Step (2) is repeated until both partitions converge.
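Two short sketches may make this subsection concrete. The first applies the basic partition rule to a CG-like loop nest. The statement bodies for A, B, and C and the inner update are invented for illustration (the actual statements appear in Figure 3.6(a)); only the placement annotations matter.

    #include <cstddef>

    // Hypothetical CG-like loop nest annotated with DOMORE's partition rule.
    // The statement bodies are invented; the comments show which thread
    // each piece is assigned to after partitioning.
    void cg_like(double* p, const double* r, double beta, double rho,
                 double rho_old, std::size_t n, int steps) {
        for (int t = 0; t < steps; ++t) {          // outer loop (not parallelized)
            rho_old = rho;                         // A: sequential -> scheduler
            rho = rho * 0.5;                       // B: sequential -> scheduler (invented)
            beta = rho / rho_old;                  // C: sequential -> scheduler (invented)
            for (std::size_t i = 0; i < n; ++i) {  // loop traversal -> scheduler
                p[i] = r[i] + beta * p[i];         // inner loop body -> worker threads
            }
        }
    }

After this split, the scheduler alone executes A, B, and C and drives the inner loop's iteration space, while the workers receive iterations, and any forwarded values such as beta, through the communication queues.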
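The second sketch is the step (1)/(2) fix-up loop. This is a minimal illustration, not DOMORE's actual implementation: the PDG is assumed to have already been collapsed into SCCs, dag holds the successor lists of the DAG_SCC, and part holds the seed assignment from step (1).

    #include <vector>

    enum Partition { SCHEDULER, WORKER };

    // dag[u] lists the successors of SCC u in DAG_SCC; part[u] holds the
    // initial assignment from step (1): SCHEDULER if the SCC contains any
    // instruction already placed on the scheduler, WORKER otherwise.
    void repartition(const std::vector<std::vector<int>>& dag,
                     std::vector<Partition>& part) {
        bool changed = true;
        while (changed) {                          // step (2), repeated to convergence
            changed = false;
            for (std::size_t u = 0; u < dag.size(); ++u) {
                if (part[u] != WORKER) continue;
                for (int v : dag[u]) {
                    // A worker SCC feeding a scheduler SCC would send values
                    // against the pipeline direction; move it to the scheduler.
                    if (part[v] == SCHEDULER) {
                        part[u] = SCHEDULER;
                        changed = true;
                        break;
                    }
                }
            }
        }
    }

Because the DAG_SCC is acyclic and the fix-up only ever moves SCCs toward the scheduler partition, the loop is guaranteed to terminate.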
3.3.2 Generating Scheduler and Worker Functions

After computing the instruction partition for scheduler and worker, DOMORE generates code for the scheduler and worker threads. The Multi-Threaded Code Generation (MTCG) algorithm used by DOMORE builds upon the algorithm proposed in [51]. The major difference is that [51] can only assign a whole inner loop invocation to one thread, while DOMORE can distribute iterations of the same invocation across different worker threads. The following description of DOMORE's MTCG highlights the differences:

1. Compute the set of relevant basic blocks (BBs) for the scheduler (T_s) and worker (T_w) threads. According to the algorithm in [51], a basic block is relevant to a thread T_i if it contains either: (a) an instruction scheduled to T_i; (b) an instruction on which any of T_i's instructions depends; or (c) a branch instruction that controls a BB relevant to T_i. A sketch of this computation appears below.
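The following is a minimal sketch of the relevance computation in step 1, under assumed simplified data structures; the real pass operates on compiler IR, and the types and names here are hypothetical.

    #include <set>
    #include <vector>

    struct Instruction {
        int bb;                                  // id of the basic block containing it
        std::vector<const Instruction*> deps;    // instructions it depends on
    };

    // partition: the instructions assigned to thread T_i.
    // branch_controls[b]: BBs whose execution is controlled by the branch ending b.
    std::set<int> relevantBBs(const std::vector<const Instruction*>& partition,
                              const std::vector<std::vector<int>>& branch_controls) {
        std::set<int> relevant;
        // (a) blocks holding an instruction scheduled to T_i, and
        // (b) blocks holding an instruction that T_i's instructions depend on.
        for (const Instruction* inst : partition) {
            relevant.insert(inst->bb);
            for (const Instruction* dep : inst->deps)
                relevant.insert(dep->bb);
        }
        // (c) blocks whose terminating branch controls a relevant BB;
        // iterate to a fixed point, since control relevance chains.
        bool changed = true;
        while (changed) {
            changed = false;
            for (int b = 0; b < static_cast<int>(branch_controls.size()); ++b) {
                if (relevant.count(b)) continue;
                for (int target : branch_controls[b]) {
                    if (relevant.count(target)) {
                        relevant.insert(b);
                        changed = true;
                        break;
                    }
                }
            }
        }
        return relevant;
    }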