Most HPC systems are clusters of shared memory nodes. To use such systems efficiently, both memory consumption and communication time have to be optimized. Therefore, hybrid programming may combine distributed-memory parallelization across the node interconnect (e.g., with MPI) with shared-memory parallelization inside each node (e.g., with OpenMP or MPI-3.0 shared memory). This course analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Multi-socket multi-core systems in highly parallel environments are given special consideration. MPI-3.0 has introduced a new shared memory programming interface, which can be combined with inter-node MPI communication. It can be used for direct neighbor accesses similar to OpenMP or for direct halo copies, and it enables new hybrid programming models. These models are compared with various hybrid MPI+OpenMP approaches and pure MPI. Numerous case studies and micro-benchmarks demonstrate the performance-related aspects of hybrid programming.
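As an illustration of the programming model described above (not part of the official course material), a minimal hybrid MPI+OpenMP sketch might look like the following: one MPI process per node communicates across the interconnect, while OpenMP threads parallelize the work inside the node. The MPI_THREAD_FUNNELED support level is assumed, i.e., only the master thread makes MPI calls.

/* Minimal hybrid MPI+OpenMP sketch (illustrative, assumes one MPI process
 * per node with several OpenMP threads inside each node). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided, rank, size;

    /* Request thread support; MPI_THREAD_FUNNELED means only the
       master thread will call MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local_sum = 0.0, global_sum = 0.0;

    /* Shared-memory parallelization inside the node via OpenMP. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000000; i++)
        local_sum += 1.0 / (i + 1.0 + rank);

    /* Distributed-memory communication across nodes via MPI,
       called outside the parallel region as FUNNELED requires. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (%d MPI processes)\n", global_sum, size);

    MPI_Finalize();
    return 0;
}

For the MPI-3.0 shared-memory route mentioned above, the corresponding entry points are MPI_Comm_split_type with MPI_COMM_TYPE_SHARED to obtain a per-node communicator and MPI_Win_allocate_shared to allocate a window that all processes on the node can access directly; the course compares this approach with the MPI+OpenMP model sketched here.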
Hands-on sessions are included on all days. Tools for hybrid programming such as thread/process placement support and performance analysis are presented in a "how-to" section. This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants.
This online course is a PRACE training event. It is organised by LRZ in cooperation with NHR@FAU, RRZE and the VSC Research Center, TU Wien.
Online course Organizer: Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities (LRZ), D-85748 Garching near Munich, Germany
Jun 22, 2022 09:00
Jun 24, 2022 16:00
Online by LRZ
English
Advanced
Parallel Programming
MPI
OpenMP
Basic MPI and OpenMP knowledge. For the hands-on sessions you should know Unix/Linux and either C/C++ or Fortran.
Dr. Claudia Blaas-Schenner (VSC Research Center, TU Wien), Dr. habil. Georg Hager (RRZE/HPC, Uni. Erlangen), Dr. Rolf Rabenseifner (HLRS, Uni. Stuttgart).
1st day – 22 June 2022
08:45 Join online
09:00 Welcome
09:05 Motivation
09:15 Introduction
09:30 Programming Models
09:35 - MPI + OpenMP
10:00 Practical (how to compile and start)
10:30 Break
10:45 - continue: MPI + OpenMP
11:30 Break
11:45 - continue: MPI + OpenMP
12:30 Practical (how to do pinning)
13:00 Lunch
14:00 Practical (hybrid through OpenMP parallelization)
15:30 Q & A, Discussion
16:00 End of first day
2nd day – 23 June 2022
08:45 Join online
09:00 - Overlapping Communication and Computation
09:30 Practical (taskloops)
10:30 Break
10:45 - MPI + OpenMP Conclusions
11:00 - MPI + Accelerators
11:30 Tools
11:45 Break
12:00 Programming Models (continued)
12:05 - MPI + MPI-3.0 Shared Memory
13:00 Lunch
14:00 Practical (replicated data)
15:30 Q & A, Discussion
16:00 End of second day
3rd day – 24 June 2022
08:45 Join online
09:00 - MPI Memory Models and Synchronization
09:40 - Pure MPI
10:00 Break
10:15 - Recap - MPI Virtual Topologies
10:45 - Topology Optimization
11:15 Break
11:30 Practical/Demo (application-aware Cartesian topology)
12:30 - Topology Optimization (Wrap up)
12:45 Conclusions
13:00 Lunch
14:00 Finish the hands-on labs, Discussion, Q & A, Feedback
16:00 End of third day (course)
Registration via the PRACE course page. Registration closes on June 8, 2022.
For further information, see also the LRZ course page.
This course is a PRACE Advanced Training Centre event. Therefore, the course is free of charge for all participants from the EU or from PRACE member countries.
https://www.hlrs.de/training/2022/HY-LRZ (at HLRS), course page at LRZ and https://events.prace-ri.eu/e/VSC-2021-HY (at PRACE)
Please contact education(at)lrz.de
See the training overview and the Supercomputing Academy pages.