Automatically Exploiting Cross-Invocation Parallelism Using Runtime ...
4.5 Overview of SPECCROSS: At compile time, the SPECCROSS compiler detects code regions composed of consecutive parallel loop invocations, parallelizes the code region, and inserts SPECCROSS library functions to enable barrier speculation. At runtime, the whole program is first executed speculatively without barriers. Once misspeculation occurs, the checkpoint process is woken up. It kills the original child process and spawns new worker threads. The worker threads re-execute the misspeculated epochs with non-speculative barriers. . . . 57
4.6 Timing diagram for SPECCROSS showing epoch and task numbers. A block with label (A, B) indicates that the thread updates its epoch number to A and task number to B when the task starts executing. . . . 58
4.7 Pseudo-code for worker threads and checker thread . . . 59
4.8 Data structure for Signature Log . . . 63
4.9 Demonstration of using SPECCROSS runtime library in a parallel program . . . 68
5.1 Performance comparison between code parallelized with pthread barrier and DOMORE . . . 81
5.2 Performance comparison between code parallelized with pthread barrier and SPECCROSS . . . 85
5.3 Loop speedup with and without misspeculation for execution with 24 threads: the number of checkpoints varies from 2 to 100. A misspeculation is randomly triggered during the speculative execution. With more checkpoints, overhead in checkpointing increases; however, overhead in re-execution after misspeculation reduces. . . . 86
5.4 Best performance achieved by this thesis work and previous work . . . 88
5.5 Outermost loop in FLUIDANIMATE . . . 89
5.6 Performance improvement of FLUIDANIMATE using different techniques . . . 90
Chapter 1
Introduction

The computing industry has relied on steadily increasing clock speeds and uniprocessor micro-architectural improvements to deliver reliable performance enhancements for a wide range of applications. Unfortunately, since 2004, the microprocessor industry has fallen off past trends due to increasingly unmanageable design complexity, power, and thermal issues. In spite of this stall in processor performance improvements, Moore's Law still remains in effect. Consistent with historic trends, the semiconductor industry continues to double the number of transistors integrated onto a single die every two years. Since conventional approaches to improving program performance with these transistors have faltered, microprocessor manufacturers leverage these additional transistors by placing multiple cores on the same die. These multi-core processors can improve system throughput and potentially speed up multi-threaded applications, but the latency of any single thread of execution remains unchanged. Consequently, to take full advantage of multi-core processors, applications must be multi-threaded, and they must be designed to efficiently use the resources provided by the processor.