automatically exploiting cross-invocation parallelism using runtime ...
library provides efficient misspeculation detection. In order to check for misspeculation, all threads periodically send memory access signatures to a dedicated checker thread. A memory access signature summarizes which bytes in memory have been accessed since the last check. The checker thread uses signatures to verify that speculative execution respects all memory dependences. If the checker thread detects misspeculation, it kills all the speculative threads and resumes execution from the last checkpoint. If two threads calculate and send signatures while they are executing in code regions guaranteed to be independent, these signatures are safely skipped to avoid unnecessary checking overhead.

To avoid a high misspeculation penalty, SPECCROSS uses a profiler to determine the speculative range, which controls how aggressively to speculate. The speculative range is the distance between a thread with a high epoch number and a thread with a low epoch number. In SPECCROSS, an epoch number counts the number of barriers each thread has passed. Non-speculative barriers ensure that all threads have the same epoch number, whereas speculative barriers allow each thread's epoch number to increase independently. When the difference between two threads' epoch numbers is zero, misspeculation is impossible, since nothing is speculated. When the difference is high, misspeculation is more likely, since more intervening memory accesses are speculated. The SPECCROSS runtime system uses profiling data to compute a limit for the speculative range. When the program reaches the limit, SPECCROSS stalls the thread with the higher epoch number until the thread with the lower epoch number has caught up. Choosing the correct limit keeps misspeculation rates low.

Figure 4.5 gives an overview of the SPECCROSS system. The SPECCROSS compiler takes a sequential program as input and detects a SPECCROSS code region as the transformation target. A SPECCROSS code region is often an outermost loop composed of consecutive parallelizable inner loops. SPECCROSS parallelizes each independent inner loop and then inserts SPECCROSS library function calls to enable cross-invocation parallelization. At runtime, the original process spawns multiple worker threads and a checker thread for misspeculation detection.

Figure 4.5: Overview of SPECCROSS: At compile time, the SPECCROSS compiler detects code regions composed of consecutive parallel loop invocations, parallelizes the code region, and inserts SPECCROSS library functions to enable barrier speculation. At runtime, the whole program is first executed speculatively without barriers. Once misspeculation occurs, the checkpoint process is woken up. It kills the original child process and spawns new worker threads. The worker threads re-execute the misspeculated epochs with non-speculative barriers.

Each worker thread executes its code section and sends requests for misspeculation detection to the checker thread. The checker thread detects dependence violations using access signatures computed by each worker thread. When the checker thread detects a violation, it signals a separate checkpoint process for misspeculation recovery. The checkpoint process is periodically forked from the original process. Once recovery is completed by squashing all speculative workers and restoring state to the last checkpoint, misspeculated epochs are re-executed with non-speculative barriers.