automatically exploiting cross-invocation parallelism using runtime ...
[Figure: geomean loop speedup (0x–8x) versus number of checkpoints (5–100), plotted with and without misspeculation.]

Figure 5.3: Loop speedup with and without misspeculation for execution with 24 threads. The number of checkpoints varies from 2 to 100, and a misspeculation is randomly triggered during the speculative execution. With more checkpoints, checkpointing overhead increases; however, the re-execution overhead after misspeculation decreases.
5.3 Comparison of DOMORE, SPECCROSS and Previous Work

In this section, we compare the performance improvement achieved by DOMORE and SPECCROSS with the best performance improvement reported in previous work [6, 10, 20, 23, 29, 58]. Figure 5.4 shows the comparison results. For most of the programs evaluated, DOMORE and SPECCROSS achieve better, or at least competitive, performance improvement.

Programs JACOBI, FDTD and SYMM were originally designed for polyhedral optimizations [23]. They can be automatically parallelized by the Polly optimizer in the LLVM compiler infrastructure. Polly uses an abstract mathematical representation to analyze the memory access patterns of a program and automatically exploits thread-level and SIMD parallelism. Polly achieves promising speedups for 16 out of 30 polyhedral benchmark programs. However, it fails to extract enough parallelism from JACOBI, FDTD and SYMM because of the irregular access patterns of their outermost loops. In contrast, SPECCROSS applies DOALL to the inner loops and exploits cross-invocation parallelism using speculative barriers, and therefore achieves much better performance.

Raman et al. manually parallelized BLACKSCHOLES in pipeline style using a multi-threaded transactional memory runtime system (SMTX [58]). They report highly scalable performance improvement on BLACKSCHOLES. DOMORE, by contrast, is limited by runtime overhead at large thread counts; further improvement would be possible if the scheduler thread could itself be parallelized.

CG and ECLAT were manually parallelized with the DSWP+ technique [29]. DSWP+ serves as a manual equivalent of DOMORE's parallelization. As shown in the graph, DOMORE achieves performance gains close to those of the manual parallelization.

Both LOOPDEP and FLUIDANIMATE have a parallel implementation in their benchmark suites. We compare the best performance results of those parallel versions with the best performance achieved by DOMORE and SPECCROSS. HELIX parallelized EQUAKE