Introduction to CUDA C
Parallel Dot Product Recap
• We perform parallel, pairwise multiplications
• Shared memory stores each thread's result
• We sum these pairwise products from a single thread
• Sounds good… but we've made a huge mistake
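The recap above can be sketched as a kernel. This is a hedged reconstruction, not the deck's exact code: the kernel name `dot`, the size `N`, and the shared array `temp` are assumptions, and it deliberately contains the mistake the next slide exposes.

```cuda
#define N 512   // assumed: one block of N threads, one element per thread

// Faulty on purpose: this is the version the recap describes,
// before the hidden bug is revealed.
__global__ void dot( int *a, int *b, int *c ) {
    __shared__ int temp[N];   // shared memory: one slot per thread

    // Step 1: in parallel, each thread writes one pairwise product
    temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];

    // Step 2: thread 0 alone reads all the products and sums them
    if ( threadIdx.x == 0 ) {
        int sum = 0;
        for ( int i = 0; i < N; i++ )
            sum += temp[i];
        *c = sum;
    }
}
```

Launched as `dot<<< 1, N >>>( dev_a, dev_b, dev_c )`, every thread in the block can see `temp` because `__shared__` memory is per-block, which is what lets thread 0 read the other threads' results at all.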
Faulty Dot Product Exposed!
• Step 1: In parallel, each thread writes a pairwise product into __shared__ int temp[]
• Step 2: Thread 0 reads the products from __shared__ int temp[] and sums them
• But there's an assumption hidden in Step 1…
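The hidden assumption: thread 0 only computes the right sum if every other thread has finished writing its product before thread 0 starts reading, and nothing in the kernel guarantees that ordering. The deck's later Synchronization slides close this hole with a barrier; a minimal sketch of the repaired kernel (same assumed names `dot`, `N`, `temp` as before) looks like this:

```cuda
#define N 512

__global__ void dot( int *a, int *b, int *c ) {
    __shared__ int temp[N];

    // Step 1: each thread writes its pairwise product
    temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];

    // Barrier: no thread in the block proceeds past this point
    // until every thread has reached it, so all writes to temp[]
    // are visible before any thread reads them.
    __syncthreads();

    // Step 2: now thread 0 can safely read and sum the products
    if ( threadIdx.x == 0 ) {
        int sum = 0;
        for ( int i = 0; i < N; i++ )
            sum += temp[i];
        *c = sum;
    }
}
```

`__syncthreads()` only synchronizes threads within one block; coordinating partial sums across many blocks is a separate problem, which the deck's multiblock slides address with atomic operations.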
Presented at the San Jose Convention Center.