Automatically Exploiting Cross-Invocation Parallelism Using Runtime Information
1.2 Contributions

Figure 1.5 situates the contributions of this thesis relative to prior work. This thesis presents two novel automatic parallelization techniques, DOMORE and SPECCROSS, that capture dynamic cross-invocation parallelism. Unlike existing techniques, DOMORE and SPECCROSS gather cross-invocation dependence information at runtime. Even for programs with irregular dependence patterns, DOMORE and SPECCROSS synchronize only those iterations that depend on each other, allowing iterations without dependences to execute concurrently. As a result, they enable more cross-invocation parallelization and achieve more scalable performance.

As a non-speculative technique, DOMORE first identifies the code region containing the targeted loop invocations, and then transforms the program by dividing the region into a scheduler thread and several worker threads. The scheduler thread contains code to detect memory access conflicts between loop iterations and code to schedule and dispatch loop iterations from different loop invocations to worker threads. To detect access violations, the scheduler duplicates the instructions used to compute the addresses of the memory locations accessed in each loop iteration. As a result, at runtime it knows which iterations access common memory locations, and it coordinates the execution of these conflicting iterations by generating and forwarding synchronization conditions to the worker threads. A synchronization condition tells a worker thread to wait until another worker thread finishes executing the conflicting iteration.
Consequently, only threads waiting on synchronization conditions must stall, and iterations from consecutive loop invocations may execute in parallel.

SPECCROSS parallelizes independent loops and replaces the barrier synchronization between two loop invocations with its speculative counterpart. Unlike non-speculative barriers, which pessimistically synchronize to enforce dependences, speculative barriers allow threads to execute past them without stalling. Speculation allows programs to optimistically execute potentially dependent instructions and later check for misspeculation.
Figure 1.5: Contribution of this thesis work. The figure classifies prior techniques (DOALL [1], DOANY [55], DOACROSS [15], LOCALWRITE [26], DSWP [51], IE [53], RPD [61], SMTLite [42], Loop Fusion [22], and Tseng [72]) by whether they exploit cross-invocation parallelism, whether they use runtime information, and whether they are speculative; DOMORE and SPECCROSS, the contributions of this thesis, occupy the runtime, cross-invocation quadrant.

If misspeculation occurs, the program recovers using checkpointed non-speculative state. Speculative barriers improve performance by synchronizing only on misspeculation.

The two techniques are complementary in the sense that they can parallelize programs with very different characteristics. SPECCROSS, with less runtime overhead, works best when a program's cross-invocation dependences seldom cause runtime conflicts, while DOMORE has the advantage in handling dependences that cause frequent conflicts. Implementation and evaluation demonstrate that both techniques achieve much better scalability than existing automatic parallelization techniques. Among twenty programs from seven benchmark suites, DOMORE is automatically applied to parallelize six of them, achieving a geomean speedup of 2.1× over code without cross-invocation parallelization and 3.2× over the original sequential performance on 24 cores. SPECCROSS is applicable to eight of the programs and achieves a geomean speedup of 4.6× over the best sequential execution, which compares favorably to the 1.3× speedup obtained by parallel execution without any cross-invocation parallelization.