Automatically Exploiting Cross-Invocation Parallelism Using Runtime ...


[Figure: geomean loop speedup (0x to 8x) plotted against the number of checkpoints (5 to 100), with separate series for runs with and without misspeculation.]

Figure 5.3: Loop speedup with and without misspeculation for execution with 24 threads. The number of checkpoints varies from 2 to 100, and a misspeculation is randomly triggered during the speculative execution. With more checkpoints, checkpointing overhead increases; however, the overhead of re-execution after misspeculation decreases.
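The tradeoff the caption describes can be made concrete with a small cost model. The sketch below is illustrative only: the per-checkpoint cost c, the expected misspeculation count m, and the total speculative work W are hypothetical parameters, not values measured in this chapter.

    #include <stdio.h>

    /* Illustrative cost model for choosing a checkpoint count n.
     * Assumptions (not from the thesis): total speculative work W,
     * a fixed cost c per checkpoint, and an expected number m of
     * misspeculations, each discarding on average half of the work
     * performed since the last checkpoint. */
    static double expected_overhead(double W, double c, double m, int n) {
        double checkpoint_cost = c * n;          /* grows with more checkpoints */
        double reexec_cost = m * (W / n) / 2.0;  /* shrinks with more checkpoints */
        return checkpoint_cost + reexec_cost;
    }

    int main(void) {
        double W = 1e6, c = 500.0, m = 1.0;      /* hypothetical parameters */
        for (int n = 2; n <= 100; n *= 2)
            printf("n=%3d  overhead=%.0f\n", n, expected_overhead(W, c, m, n));
        return 0;
    }

Minimizing n*c + m*W/(2n) analytically puts the best checkpoint count near sqrt(m*W/(2c)); under these assumptions the run with misspeculation suffers at both very small and very large checkpoint counts, consistent with the tradeoff the caption describes.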

5.3 Comparison of DOMORE, SPECCROSS, and Previous Work

In this section, we compare the performance improvement achieved by DOMORE and SPECCROSS with the best performance improvement reported in previous work [6, 10, 20, 23, 29, 58]. Figure 5.4 shows the comparison results. For most of the programs evaluated, DOMORE and SPECCROSS achieve better, or at least competitive, performance improvement.

Programs JACOBI, FDTD, and SYMM were originally designed for polyhedral optimizations [23]. They can be automatically parallelized by the Polly optimizer in the LLVM compiler infrastructure. Polly uses an abstract mathematical representation to analyze the memory access pattern of a program and automatically exploits thread-level and SIMD parallelism. Polly achieves promising speedup for 16 out of 30 polyhedral benchmark programs. However, it fails to extract enough parallelism from JACOBI, FDTD, and SYMM because of the irregular access patterns of their outermost loops. Compared to Polly, SPECCROSS applies DOALL to the inner loops and exploits cross-invocation parallelism using speculative barriers, and therefore achieves much better performance; a simplified sketch of this loop structure appears at the end of this section.

Raman et al. manually parallelized BLACKSCHOLES using pipeline-style parallelization with a multi-threaded transactional memory runtime system (SMTX [58]). They report quite scalable performance improvement on BLACKSCHOLES. DOMORE is limited by its runtime overhead at large thread counts; further performance improvement would be possible if the scheduler thread could be parallelized.

CG and ECLAT were manually parallelized with the DSWP+ technique [29]. DSWP+ serves as a manual equivalent of DOMORE's parallelization. As the graph shows, DOMORE achieves performance gains close to those of the manual parallelization.

Both LOOPDEP and FLUIDANIMATE have a parallel implementation in their benchmark suites. We compare the best performance results of their parallel versions with the best performance achieved by DOMORE and SPECCROSS. HELIX parallelized EQUAKE
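To make the loop structure discussed above concrete, the sketch below shows a simplified Jacobi-style stencil of the kind these benchmarks contain; it is an illustration of the pattern, not the benchmark source, and the function and variable names are invented. The outer time-step loop carries dependences through the grid, so Polly cannot parallelize it, while each inner sweep is DOALL. A conventional parallelization inserts a barrier after every inner-loop invocation, and it is this barrier that SPECCROSS replaces with a speculative one.

    /* Simplified Jacobi-style kernel (illustrative, not the benchmark code).
     * The outer loop over time steps t carries a dependence through the
     * arrays, so only the inner sweep is DOALL. */
    void jacobi(double *a, double *b, int n, int steps) {
        for (int t = 0; t < steps; t++) {       /* successive loop invocations */
            #pragma omp parallel for            /* inner DOALL loop */
            for (int i = 1; i < n - 1; i++)
                b[i] = 0.5 * (a[i - 1] + a[i + 1]);
            /* Implicit barrier here: every thread waits before the next
             * invocation, even though under static scheduling most
             * iterations of step t+1 read only values the same thread
             * produced in step t. SPECCROSS replaces this barrier with a
             * speculative one and detects the rare conflicts at runtime. */
            double *tmp = a; a = b; b = tmp;    /* swap input/output buffers */
        }
    }

Because cross-invocation conflicts under this access pattern are confined to iterations near thread boundaries, a speculative barrier rarely misspeculates here, which is consistent with the speedups SPECCROSS reports on these programs.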
