automatically exploiting cross-invocation parallelism using runtime ...
These techniques are referred to as speculative techniques. There have been many proposals for thread-level speculation (TLS) techniques which speculatively break various loop dependences [61, 42, 58]. Once these dependences are broken, effective parallelization becomes possible. Using the same loop in Figure 2.6, if TLS speculatively breaks the loop exit control dependences (Figure 2.7), assuming that the loop iterates many times, then the execution schedule shown in Figure 2.8(a) is possible. This parallelization offers a speedup of 4 over single-threaded execution. Just as in TLS, by speculating the loop exit control dependence, the largest SCC is broken, allowing SpecDSWP to deliver a speedup of 4 over single-threaded execution (as shown in Figure 2.8(b)).

IE and speculative techniques take advantage of runtime information to exploit parallelism more aggressively. They are able to adapt to the dependence patterns manifested at runtime by particular input data sets. Speculative techniques work best when dependences rarely manifest, since frequent misspeculation leads to high recovery cost, which in turn negates the benefit of parallelization. Non-speculative techniques, on the other hand, do not incur recovery cost, and are thus better choices for programs whose dependences manifest more frequently.

2.3 Cross-Invocation Parallelization

While intra-invocation parallelization techniques are adequate for programs with few loop invocations, they are not adequate for programs with many loop invocations. All of these techniques parallelize independent loop invocations and use global synchronizations (barriers) to respect the dependences between loop invocations. However, global synchronizations stall all of the threads, forcing them to wait for the last thread to arrive at the synchronization point, causing inefficient utilization of processors and losing the potential cross-invocation parallelism.
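To make the cost of a global barrier concrete, the following sketch contrasts when iterations of a second loop invocation may begin under a barrier with when their actual cross-invocation dependences would allow them to begin. All timings, dependences, and names here are hypothetical, chosen only to illustrate the point; they are not drawn from any benchmark discussed in this dissertation.

```python
# Sketch: earliest start times for iterations of a second loop invocation,
# under a global barrier versus under the actual cross-invocation
# dependences. Timings and dependences are hypothetical.

def earliest_starts(finish_times, deps, use_barrier):
    """finish_times[i]: when iteration i of invocation 1 finishes.
    deps[j]: the invocation-1 iterations that iteration j of
    invocation 2 reads from."""
    if use_barrier:
        # A barrier makes every thread wait for the slowest iteration.
        barrier = max(finish_times.values())
        return {j: barrier for j in deps}
    # Respecting only the real dependences: wait just for the producers.
    return {j: max((finish_times[i] for i in d), default=0)
            for j, d in deps.items()}

finish = {0: 5, 1: 1, 2: 9, 3: 2}        # hypothetical finish times
deps = {0: [0], 1: [1], 2: [2], 3: [3]}  # iteration j reads only iteration j
with_barrier = earliest_starts(finish, deps, use_barrier=True)
without_barrier = earliest_starts(finish, deps, use_barrier=False)
```

Under the barrier, every iteration of the second invocation waits until time 9, the finish time of the slowest iteration; respecting only the real dependences lets three of the four iterations start much earlier. That gap is exactly the cross-invocation parallelism the barrier discards.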
Studies [45] have shown that in real-world applications, synchronizations can contribute as much as 61% of total program runtime. There is an opportunity to improve performance: iterations from different loop invocations can often execute concurrently without violating program semantics. Instead of waiting at synchronization points, threads can execute iterations from subsequent loop invocations. This additional cross-invocation parallelism can improve processor utilization and help parallel programs achieve much better scalability.

Prior work has presented automatic parallelization techniques that exploit cross-invocation parallelism. Ferrero et al. [22] apply affine techniques to aggregate small parallel loops into large ones so that intra-invocation parallelization techniques can be applied directly. Bodin [50] applies communication analysis to analyze the data flow between threads. Instead of inserting global synchronizations between invocations, they apply pairwise synchronization between threads if data flows between them. These finer-grained synchronization techniques allow for more potential parallelization at runtime, since only some of the threads have to stall and wait while the others can proceed across the invocation boundary. Tseng [72] partitions iterations within the same loop invocation so that cross-invocation dependences flow within the same worker thread. However, all these techniques share the same limitation as the first group of intra-invocation parallelization techniques: they rely on static analysis and cannot handle input-dependent, irregular program patterns.

Some attempts have been made to exploit parallelism beyond the scope of a loop invocation using runtime information. BOP [19] allows programmers to specify all potentially concurrent code regions in a sequential program. Each of these concurrent tasks is treated as a transaction and is protected from access violations at runtime. TCC [25] requires programmers to annotate the parallel version of the program.
The programmer can specify the boundary of each transaction and assign each transaction a phase number to guarantee the correct commit order of the transactions. However, these two techniques both require manual annotation or manual parallelization by programmers. Additionally, these techniques do not distinguish intra- and inter-invocation dependences, which means they may incur unnecessary overhead at runtime.
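The phase-number commit ordering used by TCC-style systems can be sketched as follows. The names and data structures here are illustrative, not TCC's actual interface: transactions may finish executing in any order, but their commits are released strictly by ascending phase number, so each transaction observes the effects of all lower-numbered phases.

```python
# Sketch: in-order commit by phase number, in the style of TCC's
# phase-numbered transactions. Names are hypothetical.
import heapq

def commit_in_phase_order(finished):
    """`finished` yields (phase, name) pairs in completion order.
    Returns transaction names in the order their commits are released."""
    pending, committed = [], []
    next_phase = 0
    for phase, name in finished:
        heapq.heappush(pending, (phase, name))
        # Release every transaction whose phase number is next in line;
        # transactions with later phases stay buffered.
        while pending and pending[0][0] == next_phase:
            committed.append(heapq.heappop(pending)[1])
            next_phase += 1
    return committed

# Transactions finish out of order, yet commit strictly by phase.
order = commit_in_phase_order([(2, "t2"), (0, "t0"), (1, "t1"), (3, "t3")])
```

Note how "t2" finishes first but is buffered until phases 0 and 1 have committed; this is how the scheme preserves sequential commit semantics without distinguishing which dependences actually cross an invocation boundary.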