and the number of checking requests for execution with 24 threads.

The performance results (Figure 5.2) indicate that with higher thread counts, the checker thread may become the bottleneck. In particular, the performance of SPECCROSS scales up to 18 threads and either flattens or decreases after that. The effect of the checker thread in limiting performance can be illustrated with LLUBENCH: its number of checking requests increases by 3.3× when going from 8 to 24 threads, yet the resulting performance improvement is minimal. Parallelizing dependence violation detection in the checker thread is one option to address this problem and is left as future work.

Checkpointing is much more expensive than signature calculation or checking operations and is therefore done infrequently. For the benchmark programs evaluated, there are fewer than 10 checkpoints, since SPECCROSS by default checkpoints every 1000 epochs. However, the checkpointing frequency can be reconfigured depending on the desired performance characteristics. To demonstrate the impact of checkpointing on performance, Figure 5.3 shows the geomean speedup across all eight benchmark programs as the number of checkpoints is increased from 2 to 100.

To evaluate the overhead of the whole recovery process, we randomly triggered a misspeculation during speculative parallel execution. The results are also shown in Figure 5.3. As can be seen, more checkpoints increase the runtime overhead but also reduce the time spent in re-execution once a misspeculation happens. Finding an optimal configuration is important and is left as future work.
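To make the checker's role concrete, the following is a minimal sketch, in C, of the kind of signature comparison a checker thread performs for each request. The names (sig_t, sig_add, sigs_overlap) and the single hash function are illustrative assumptions, not SPECCROSS's actual interface; the point is only that every worker's checking requests funnel through one such serial loop, which is why the checker can become the bottleneck as the thread count grows.

/* Sketch only: hypothetical per-request work in a single checker thread.
 * Memory accesses are hashed into per-epoch Bloom-filter-style signatures,
 * and two epochs may conflict if their signatures share any set bit.      */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SIG_WORDS 64                      /* 4096-bit signature */

typedef struct { uint64_t bits[SIG_WORDS]; } sig_t;

/* Hash an address into the signature (one hash function for brevity). */
static void sig_add(sig_t *s, uintptr_t addr)
{
    uint64_t h = addr * 0x9E3779B97F4A7C15ULL;     /* Fibonacci hashing */
    s->bits[(h >> 6) % SIG_WORDS] |= 1ULL << (h & 63);
}

/* Possible cross-epoch dependence if any bit is set in both signatures. */
static bool sigs_overlap(const sig_t *writes, const sig_t *reads)
{
    for (int i = 0; i < SIG_WORDS; i++)
        if (writes->bits[i] & reads->bits[i])
            return true;
    return false;
}

int main(void)
{
    sig_t epoch1_writes = {0}, epoch2_reads = {0};
    int shared;

    sig_add(&epoch1_writes, (uintptr_t)&shared);   /* epoch 1 wrote `shared` */
    sig_add(&epoch2_reads,  (uintptr_t)&shared);   /* epoch 2 later reads it */

    printf("violation? %s\n",
           sigs_overlap(&epoch1_writes, &epoch2_reads) ? "yes" : "no");
    return 0;
}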
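The checkpoint interval trades runtime overhead against re-execution cost after a misspeculation. The sketch below illustrates one way a fork()-based checkpoint and rollback could be structured, with the parent process acting as a copy-on-write snapshot. All function names, the deterministic misspeculation stub, and the handling of results are placeholder assumptions and do not reflect the actual SPECCROSS implementation, which, for instance, must hand committed results back to the checkpointed state and respawn worker threads on recovery.

/* Sketch only: checkpoint every ckpt_interval epochs by forking; if the
 * speculative child reports a misspeculation, the parent's untouched copy
 * re-executes the interval non-speculatively.                             */
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-ins for the parallel region; real code would spawn worker threads. */
static bool speculate_interval(int first, int n)
{
    printf("speculating epochs %d..%d\n", first, first + n - 1);
    fflush(stdout);                        /* child exits with _exit(); flush now */
    return (first / n) % 4 != 3;           /* pretend every 4th interval misspeculates */
}
static void reexecute_interval(int first, int n)
{
    printf("re-executing epochs %d..%d from checkpoint\n", first, first + n - 1);
}

static void run_interval_with_checkpoint(int first, int n)
{
    pid_t child = fork();                  /* checkpoint = copy-on-write snapshot */
    if (child == 0)
        _exit(speculate_interval(first, n) ? 0 : 1);

    int status;
    waitpid(child, &status, 0);
    if (!(WIFEXITED(status) && WEXITSTATUS(status) == 0))
        reexecute_interval(first, n);      /* misspeculation: fall back to snapshot */
}

int main(void)
{
    const int total_epochs = 5000, ckpt_interval = 1000;  /* default interval in the text */
    for (int e = 0; e < total_epochs; e += ckpt_interval)
        run_interval_with_checkpoint(e, ckpt_interval);
    return 0;
}

A smaller interval bounds the work lost to any one misspeculation but pays the snapshot cost more often, which is the tradeoff Figure 5.3 quantifies.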
[Figure 5.2: Performance comparison between code parallelized with pthread barriers and with SPECCROSS. Each panel plots loop speedup (program speedup for FLUIDANIMATE-2), from 0x to 12x, against the number of threads (2 to 24): (a) CG, (b) EQUAKE, (c) FDTD, (d) FLUIDANIMATE-2, (e) JACOBI, (f) LLUBENCH, (g) LOOPDEP, (h) SYMM.]