automatically exploiting cross-invocation parallelism using runtime ...
applied to the outermost loop, generating a parallel program in which the redundant code executes in the scheduler thread and each inner-loop iteration is scheduled only to its appropriate owner thread. Although DOMORE reduces the overhead of redundant computation, partitioning the redundant code into the scheduler enlarges the sequential region, which becomes the major factor limiting scalability in this case.

SYMM from the PolyBench [54] suite demonstrates the capabilities of a very simple multi-grid solver in computing a three-dimensional potential field. The target loop is a three-level nested loop; DOALL is applicable to the second-level inner loop. As the results show, even after the DOMORE optimization, the scalability of SYMM is poor. The major cause is that each inner-loop invocation executes in only about 4,000 clock cycles; as the number of threads increases, the overhead of multi-threading outweighs any performance gain.

The performance of DOMORE is limited by the sequential scheduler thread at large thread counts. To address this problem, we could parallelize the computeAddr function; the algorithm proposed in [36] could be adopted for that purpose. This is left as future work.
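The scheduler/owner partitioning described above can be sketched as follows. This is an illustrative simulation, not the thesis's implementation: compute_addr is a hypothetical stand-in for the redundant address computation, and the owner hash is an assumed policy. The point is that the scheduler dispatches each inner-loop iteration to the worker that owns the address it writes, so conflicting iterations from different outer-loop invocations serialize on one worker while independent iterations overlap.

```python
NUM_WORKERS = 4

def compute_addr(outer, inner):
    """Hypothetical stand-in for the redundant address computation
    (the role played by computeAddr in the scheduler thread)."""
    return (inner * 7 + outer) % 10  # arbitrary access pattern

def owner_of(addr):
    """The same address always maps to the same worker, so
    cross-invocation dependences are respected without a barrier."""
    return addr % NUM_WORKERS

def schedule(outer_iters, inner_iters):
    """Scheduler thread's role: run the redundant computation and
    dispatch (outer, inner) iterations to per-worker queues in
    program order."""
    queues = [[] for _ in range(NUM_WORKERS)]
    for outer in range(outer_iters):
        for inner in range(inner_iters):
            addr = compute_addr(outer, inner)
            queues[owner_of(addr)].append((outer, inner, addr))
    return queues

queues = schedule(outer_iters=3, inner_iters=16)
total = sum(len(q) for q in queues)
print(total)  # every iteration dispatched exactly once -> 48
```

Because this dispatch loop runs sequentially in the scheduler, it is exactly the region whose growth limits scalability at high thread counts, which motivates parallelizing the address computation itself.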
[Figure omitted: six panels of loop speedup (0x to 12x) versus number of threads (2 to 24), comparing DOMORE against a pthread-barrier baseline for (a) BLACKSCHOLES, (b) CG, (c) ECLAT, (d) FLUIDANIMATE-1, (e) LLUBENCH, and (f) SYMM.]

Figure 5.1: Performance comparison between code parallelized with pthread barrier and DOMORE.