Automatically Exploiting Cross-Invocation Parallelism Using Runtime ...
4.5.4 Load Balancing Techniques
4.5.5 Multi-threaded Program Checkpointing
4.5.6 Dependence Distance Analysis

5 Evaluation
5.1 DOMORE Performance Evaluation
5.2 SPECCROSS Performance Evaluation
5.3 Comparison of DOMORE, SPECCROSS and Previous Work
5.4 Case Study: FLUIDANIMATE
5.5 Limitations of Current Parallelizing Compiler Infrastructure

6 Conclusion and Future Direction
6.1 Conclusion
6.2 Future Directions
List of Figures

1.1 Scientists spend large amounts of time waiting for their programs to generate results. Among the 114 interviewed researchers from 20 different departments at Princeton University, almost half had to wait days, weeks, or even months for their simulation programs to finish.
1.2 Types of parallelism exploited in scientific research programs: one third of the interviewed researchers do not use any parallelism in their programs; the others mainly use job parallelism or borrow already parallelized programs.
1.3 Example of parallelizing a program with barriers
1.4 Comparison between executions with and without barriers. A block with label x.y represents the y-th iteration in the x-th loop invocation.
1.5 Contribution of this thesis work

2.1 Sequential Code with Two Loops
2.2 Performance sensitivity due to memory analysis on a shared-memory machine.
2.3 Intra-invocation parallelization techniques which rely on static analysis (X.Y refers to the Y-th statement in the X-th iteration of the loop): (a) DOALL concurrently executes iterations among threads, and no inter-thread synchronization is necessary; (b) DOANY applies locks to guarantee atomic execution of the function malloc; (c) LOCALWRITE goes through each node, and each worker thread only updates the nodes belonging to itself.