Hybrid MPI and OpenMP programming tutorial - Prace Training Portal

Remarks on MPI and PGAS (UPC & CAF)

• Point-to-point neighbor communication
  – PGAS or MPI nonblocking communication may fit if the message size is large enough to make overlapping worthwhile (a minimal sketch of such an overlapped halo exchange follows at the end of this section).
• Collective communication
  – Library routines are best optimized.
  – Non-blocking collectives (introduced with MPI-3.0) versus calling blocking MPI collectives from an additional communication thread (see the second sketch at the end of this section).
  – Only blocking collectives in the PGAS library?
• For extreme HPC (many nodes x many cores)
  – Most of the parallelization may still use MPI.
  – Parts are optimized with PGAS, e.g., for better latency hiding.
  – PGAS efficiency is less portable than MPI.
  – #ifdef … PGAS
  – Requires mixed PGAS & MPI programming, which will be addressed by MPI-3.0.

Outline

• Introduction / Motivation
• Programming models on clusters of SMP nodes
• Case studies / pure MPI vs. hybrid MPI+OpenMP
• Practical "How-To" on hybrid programming
• Mismatch problems
• Opportunities: application categories that can benefit from hybrid parallelization
• Thread-safety quality of MPI libraries
• Tools for debugging and profiling MPI+OpenMP
• Other options on clusters of SMP nodes
• Summary

Acknowledgements

• We want to thank
  – Co-authors who could not be here to present their slides:
    • Gabriele Jost
    • Georg Hager
  – Other contributors:
    • Gerhard Wellein, RRZE
    • Alice Koniges, NERSC, LBNL
    • Rainer Keller, HLRS and ORNL
    • Jim Cownie, Intel
    • KOJAK project at JSC, Research Center Jülich
    • HPCMO Program and the Engineer Research and Development Center Major Shared Resource Center, Vicksburg, MS (http://www.erdc.hpc.mil/index)
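The following is not part of the original slides: a minimal sketch of the point-to-point overlap idea mentioned above, assuming a 1-D domain decomposition with one halo cell per side. The array size N and the simple summation stand in for real application work; only the MPI calls themselves reflect the technique discussed.

```c
/* Sketch: overlapping a 1-D halo exchange with interior computation
 * using MPI nonblocking point-to-point calls. Sizes and the summation
 * are illustrative placeholders, not from the tutorial. */
#include <mpi.h>
#include <stdio.h>

#define N 1000000          /* local interior size (assumption) */

static double u[N + 2];    /* local array with one halo cell on each side */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Request req[4];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* 1. Start the halo exchange (nonblocking). */
    MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &req[0]);
    MPI_Irecv(&u[N + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &req[1]);
    MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &req[2]);
    MPI_Isend(&u[N],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req[3]);

    /* 2. Compute on interior cells that do not need halo data
     *    while the messages are in flight. */
    double sum = 0.0;
    for (int i = 2; i <= N - 1; i++)
        sum += u[i];

    /* 3. Complete the exchange, then handle the boundary cells. */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    sum += u[1] + u[N];

    printf("rank %d: local sum = %f\n", rank, sum);
    MPI_Finalize();
    return 0;
}
```

Whether the overlap pays off depends on the message size relative to the interior work, which is exactly the caveat in the slide.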
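Also not part of the original slides: a minimal sketch of the MPI-3.0 non-blocking collective alternative mentioned in the collective-communication bullet, hiding a global reduction behind independent local work instead of running a blocking collective on a separate communication thread. The local_work() routine is an illustrative placeholder.

```c
/* Sketch: hiding a global reduction behind local work with the
 * MPI-3.0 nonblocking collective MPI_Iallreduce. */
#include <mpi.h>
#include <stdio.h>

static double local_work(int rank)
{
    /* Placeholder for computation that does not depend on the reduction. */
    double x = 0.0;
    for (int i = 0; i < 1000000; i++)
        x += (rank + 1) * 1e-6;
    return x;
}

int main(int argc, char **argv)
{
    int rank;
    double local, global = 0.0, extra;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank + 1.0;

    /* Start the reduction; it can progress while we keep computing. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    extra = local_work(rank);             /* overlapped computation */

    MPI_Wait(&req, MPI_STATUS_IGNORE);    /* reduction result now valid */

    if (rank == 0)
        printf("global sum = %f (extra local work = %f)\n", global, extra);

    MPI_Finalize();
    return 0;
}
```

Compared with a dedicated communication thread, this keeps all progress inside the MPI library and avoids the thread-safety requirements (MPI_THREAD_MULTIPLE or funneled designs) discussed elsewhere in the tutorial.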
