C++ for Scientists - Technische Universität Dresden
260 CHAPTER 13. PARALLELISM

```cpp
    }
    MPI_Finalize();
    return 0;
}
```

13.2.2 Generic Message Passing

Each process computes a local contribution, receives the partial sum from its predecessor, adds its own value, and sends the result to its successor; the last process holds the final result.

```cpp
#include <iostream>
#include <cmath>
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int myrank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    float vec[2];
    vec[0]= 2*myrank; vec[1]= vec[0]+1;

    // Local accumulation
    float local= std::abs(vec[0]) + std::abs(vec[1]);

    // Global accumulation
    float global= 0.0f;
    MPI_Status st;

    // Receive from predecessor
    if (myrank > 0)
        MPI_Recv(&global, 1, MPI_FLOAT, myrank-1, 387, MPI_COMM_WORLD, &st);

    // Increment
    global+= local;

    // Send to successor
    if (myrank+1 < nprocs)
        MPI_Send(&global, 1, MPI_FLOAT, myrank+1, 387, MPI_COMM_WORLD);
    else
        std::cout << "Hello, I am the last process and I know that |v|_1 is "
                  << global << ".\n";

    MPI_Finalize();
    return 0;
}
```

This is a low abstraction level: the program spells out every message. Alternatively, the library performs the reduction:
13.2. MESSAGE PASSING 261

```cpp
#include <iostream>
#include <cmath>
#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int myrank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    float vec[2];
    vec[0]= 2*myrank; vec[1]= vec[0]+1;

    // Local accumulation
    float local= std::abs(vec[0]) + std::abs(vec[1]);

    // Global accumulation
    float global;
    MPI_Allreduce(&local, &global, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    std::cout << "Hello, I am process " << myrank
              << " and I know too that |v|_1 is " << global << ".\n";

    MPI_Finalize();
    return 0;
}
```

This version is preferable because:
• Higher abstraction: the collective operation states what is computed, not how the messages flow.
• The MPI implementation is usually adapted to the underlying hardware: typically logarithmic effort; it can even be tuned in assembler for the network card.