[Figure 3.3: Performance improvement of CG with and without DOMORE. Loop speedup (0x to 12x) versus number of threads (2 to 24), comparing DOMORE against a Pthread-barrier baseline.]

3.2 Runtime Synchronization

DOMORE's runtime synchronization system consists of three parts: detection of dependences at runtime, generation of synchronization conditions, and synchronization of iterations across threads. The pseudo-code for the scheduler and worker threads appears in Algorithms 1 and 2.

3.2.1 Detecting Dependences

DOMORE's scheduler thread detects dependences which manifest at runtime. Shadow memory is employed to determine memory dependences. Each entry in shadow memory contains a tuple consisting of a thread ID (tid) and an iteration number (iterNum). For each iteration, the scheduler determines which memory addresses the worker will access using the computeAddr function. The computeAddr function collects these addresses by redundantly executing related instructions duplicated from the inner loop. Details of the automatic generation of computeAddr can be found in Section 3.3.4. The scheduler maps each of these addresses to a shadow memory entry and updates that entry to indicate that the most recent access to the respective memory location is by worker thread tid in iteration iterNum. When an address is accessed by two different threads, the scheduler synchronizes the affected threads.

Although the use of shadow memory increases memory overhead, our experiments demonstrate that a shadow array with 65568 entries (about 0.5MB) is efficient enough for detecting dynamic dependences. However, a more space-efficient conflict detection scheme could also be used by DOMORE. For example, Mehrara et al. [42] propose a lightweight memory signature scheme to detect memory conflicts. The best time-space trade-off depends on end-user requirements.
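To make the shadow-memory bookkeeping concrete, the C sketch below shows one way the scheduler-side check could look. It is a minimal illustration under stated assumptions, not DOMORE's implementation (Algorithm 1 gives the actual scheduler pseudo-code): the hash from addresses to entries, the 64-address buffer, the zero-means-untouched convention, and the helpers shadow_entry, send_sync_cond, and detect_and_record are all introduced here for illustration; only computeAddr, tid, and iterNum come from the text.

#include <stdint.h>
#include <stddef.h>

#define SHADOW_ENTRIES 65568  /* entry count reported in the text (~0.5MB) */

typedef struct {
    int tid;      /* worker that most recently accessed the location */
    int iterNum;  /* iteration of that access */
} ShadowEntry;

static ShadowEntry shadow[SHADOW_ENTRIES];

/* Illustrative hash from an address to a shadow entry. Aliasing only
   causes extra (conservative) synchronization, never a missed one. */
static ShadowEntry *shadow_entry(void *addr) {
    return &shadow[((uintptr_t)addr >> 3) % SHADOW_ENTRIES];
}

/* Compiler-generated in DOMORE (Section 3.3.4); this signature is an
   assumption: fill addrs with the addresses iteration iterNum will
   access and return how many there are. */
size_t computeAddr(int iterNum, void **addrs, size_t max);

/* Assumed helper: send condition (depId, depIterNum) to worker tid. */
void send_sync_cond(int tid, int depId, int depIterNum);

void detect_and_record(int tid, int iterNum) {
    void *addrs[64];  /* assumed bound on accesses per iteration */
    size_t n = computeAddr(iterNum, addrs, 64);
    for (size_t k = 0; k < n; k++) {
        ShadowEntry *e = shadow_entry(addrs[k]);
        if (e->tid != 0 && e->tid != tid)   /* 0 = untouched (assumption) */
            send_sync_cond(tid, e->tid, e->iterNum);
        e->tid = tid;          /* most recent access is now... */
        e->iterNum = iterNum;  /* ...(tid, iterNum) */
    }
}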
3.2.2 Generating Synchronization Conditions

If two iterations dynamically depend on each other, the worker threads assigned to execute them must synchronize. This requires collaboration between the scheduler and worker threads. The scheduler constructs synchronization conditions and sends them to the scheduled worker thread. A synchronization condition is a tuple also consisting of a thread ID (depId) and an iteration number (depIterNum). A synchronization condition tells a worker thread to wait for another worker (depId) to finish a particular iteration (depIterNum).

To indicate that a worker thread is ready to execute a particular iteration, the scheduling thread sends the worker thread a special tuple. The first element is a token (NO_SYNC) indicating no further synchronization is necessary to execute the iteration specified by the second element (iterNum).

Suppose a dependence is detected while scheduling iteration i to worker thread T1. T1 accesses the memory location ADDR in iteration i. The shadow array entry (shadow[ADDR]) records that the same memory location was most recently accessed by worker thread T2 in iteration j. The scheduler thread will send (T2, j) to thread T1. When the scheduler detects no further dependence for the iteration, it sends (NO_SYNC, i) to T1.
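A matching worker-side sketch, again illustrative rather than the thesis's implementation (Algorithm 2 gives the real worker pseudo-code): the encoding of NO_SYNC as -1, the receive_tuple queue helper, and the latestFinished progress table are assumptions introduced here. A worker drains (depId, depIterNum) conditions by waiting on the named worker's published progress, then executes once its (NO_SYNC, iterNum) tuple arrives.

#define MAX_WORKERS 24
#define NO_SYNC (-1)  /* assumed encoding of the NO_SYNC token */

typedef struct { int first; int second; } Tuple;  /* (depId|NO_SYNC, iter) */

/* Assumed per-worker progress table: last iteration each worker finished. */
volatile int latestFinished[MAX_WORKERS];

Tuple receive_tuple(int myTid);      /* assumed: pops this worker's queue */
void execute_iteration(int iterNum); /* the parallelized loop body */

void worker_loop(int myTid) {
    for (;;) {
        Tuple t = receive_tuple(myTid);
        if (t.first == NO_SYNC) {
            /* (NO_SYNC, iterNum): all conditions satisfied; run it. */
            execute_iteration(t.second);
            latestFinished[myTid] = t.second;  /* publish progress */
        } else {
            /* (depId, depIterNum): busy-wait until worker depId has
               finished iteration depIterNum. */
            while (latestFinished[t.first] < t.second)
                ;  /* spin */
        }
    }
}

In the running example above, T1's queue would hold (T2, j) followed by (NO_SYNC, i): T1 spins until T2 publishes progress past iteration j, then executes iteration i.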