Introduction to CUDA C
__syncthreads()

- We can synchronize threads with the function __syncthreads().
- Threads in the block wait until all threads have hit the __syncthreads() call.
  (Diagram: Thread 0 through Thread 4 each reach __syncthreads() before any of them proceeds.)
- Threads are only synchronized within a block.
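As an illustration of why the barrier matters (this kernel is not from the slides; the kernel name, array names, and BLOCK_SIZE are invented for the sketch): whenever one thread reads a shared-memory slot written by a *different* thread, a __syncthreads() must sit between the write and the read.

```cuda
#define BLOCK_SIZE 256

// Each thread copies one element into shared memory, then reads its
// neighbor's element. Without the barrier, the neighbor's write might
// not have happened yet when we read it.
__global__ void shift_left( int *in, int *out ) {
    __shared__ int temp[BLOCK_SIZE];
    int i = threadIdx.x;

    temp[i] = in[i];                       // write my own slot
    __syncthreads();                       // wait until every slot is written
    out[i] = temp[(i + 1) % BLOCK_SIZE];   // now safe to read a neighbor's slot
}
```

Note that every thread in the block must reach the same __syncthreads(); placing it inside a branch that only some threads take leads to deadlock.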
Parallel Dot Product: dot()

    __global__ void dot( int *a, int *b, int *c ) {
        __shared__ int temp[N];
        temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];

        __syncthreads();

        if( 0 == threadIdx.x ) {
            int sum = 0;
            for( int i = 0; i < N; i++ )
                sum += temp[i];
            *c = sum;
        }
    }

- With a properly synchronized dot() routine, let's look at main()
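The main() the slide points to appears on a later page of the deck; as a hedged sketch only, the host code for this single-block kernel typically takes the following shape (the value of N, the sample input values, and the variable names here are assumptions, and error checking is omitted for brevity):

```cuda
#define N 512

int main( void ) {
    int a[N], b[N], c;                  // host copies of a, b, c
    int *dev_a, *dev_b, *dev_c;         // device copies of a, b, c
    int size = N * sizeof( int );

    // allocate device memory
    cudaMalloc( (void**)&dev_a, size );
    cudaMalloc( (void**)&dev_b, size );
    cudaMalloc( (void**)&dev_c, sizeof( int ) );

    // fill in sample input on the host (values are an assumption)
    for( int i = 0; i < N; i++ ) { a[i] = i; b[i] = 2; }

    // copy inputs to the device
    cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
    cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );

    // launch one block of N threads, matching the kernel's use of threadIdx.x
    dot<<< 1, N >>>( dev_a, dev_b, dev_c );

    // copy the single-int result back to the host
    cudaMemcpy( &c, dev_c, sizeof( int ), cudaMemcpyDeviceToHost );

    cudaFree( dev_a );
    cudaFree( dev_b );
    cudaFree( dev_c );
    return 0;
}
```

A single block is used because __syncthreads() only synchronizes within a block; the multiblock version later in the deck changes this launch configuration.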