Introduction to CUDA C
Multiblock Dot Product: dot()

__global__ void dot( int *a, int *b, int *c ) {
    __shared__ int temp[THREADS_PER_BLOCK];
    int index = threadIdx.x + blockIdx.x * blockDim.x;
    temp[threadIdx.x] = a[index] * b[index];

    __syncthreads();

    if( 0 == threadIdx.x ) {
        int sum = 0;
        for( int i = 0; i < THREADS_PER_BLOCK; i++ )
            sum += temp[i];
        atomicAdd( c, sum );
    }
}

- Now let's fix up main() to handle a multiblock dot product
Parallel Dot Product: main()

#define N (2048*2048)
#define THREADS_PER_BLOCK 512

int main( void ) {
    int *a, *b, *c;               // host copies of a, b, c
    int *dev_a, *dev_b, *dev_c;   // device copies of a, b, c
    int size = N * sizeof( int ); // we need space for N ints

    // allocate device copies of a, b, c
    cudaMalloc( (void**)&dev_a, size );
    cudaMalloc( (void**)&dev_b, size );
    cudaMalloc( (void**)&dev_c, sizeof( int ) );

    a = (int *)malloc( size );
    b = (int *)malloc( size );
    c = (int *)malloc( sizeof( int ) );

    random_ints( a, N );
    random_ints( b, N );
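The slide stops after the host arrays are initialized. The remaining steps (copy the inputs to the device, launch one block per THREADS_PER_BLOCK elements, copy the single-int result back, clean up) continue inside main() and can be sketched as below. This is a hedged reconstruction of the usual pattern, not the deck's own continuation; in particular, zeroing dev_c with cudaMemset is an assumption so that atomicAdd accumulates from 0.

```cuda
    // copy inputs to the device
    cudaMemcpy( dev_a, a, size, cudaMemcpyHostToDevice );
    cudaMemcpy( dev_b, b, size, cudaMemcpyHostToDevice );
    // zero the accumulator so the kernel's atomicAdd starts from 0 (assumption)
    cudaMemset( dev_c, 0, sizeof( int ) );

    // launch dot() with one thread per element: N/THREADS_PER_BLOCK blocks
    dot<<< N / THREADS_PER_BLOCK, THREADS_PER_BLOCK >>>( dev_a, dev_b, dev_c );

    // copy the single-int result back to the host
    cudaMemcpy( c, dev_c, sizeof( int ), cudaMemcpyDeviceToHost );

    free( a ); free( b ); free( c );
    cudaFree( dev_a ); cudaFree( dev_b ); cudaFree( dev_c );
    return 0;
}
```

Note that N (2048*2048) is an exact multiple of THREADS_PER_BLOCK (512), so every thread has a valid element and no bounds check is needed inside the kernel.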