The goal of this course is to give people with some programming experience an introduction to the MPI and OpenMP parallel programming models. It starts at beginner level, but also covers advanced features of the current standards. Hands-on exercises (in C, Fortran, and Python) will allow participants to immediately test and understand the Message Passing Interface (MPI) constructs and OpenMP's shared memory directives.
The first block (Wed+Thu, Aug 17-18, 2022) will be an introduction to OpenMP, also covering newer OpenMP 4.0/4.5/5.0 features such as the vectorization directives, thread affinity, and OpenMP places. (Fri-Sun are not course days.)
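As a taste of these newer features, the following minimal C sketch (illustrative only, not part of the course material) uses the combined parallel for simd construct; thread affinity would then be controlled at run time via the standard OMP_PLACES and OMP_PROC_BIND environment variables:

    /* simd_demo.c -- sketch of the OpenMP 4.0 'simd' construct.
       Example build (GCC):  gcc -fopenmp simd_demo.c -o simd_demo
       Example affinity:     OMP_PLACES=cores OMP_PROC_BIND=close ./simd_demo */
    #include <stdio.h>

    #define N 1024

    int main(void) {
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        /* Distribute the loop over threads and additionally ask the
           compiler to generate SIMD code within each thread's chunk. */
        #pragma omp parallel for simd
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[0] = %f\n", c[0]);
        return 0;
    }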
The second block (Mon+Tue, Aug 22-23, 2022) is an introduction to MPI, including a comprehensive treatment of nonblocking MPI communication as well as intermediate topics such as derived data types and virtual process topologies.
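To illustrate what "nonblocking" means in practice, here is a minimal sketch (again illustrative, not course material) of a deadlock-free ring exchange with MPI_Isend/MPI_Irecv:

    /* ring_demo.c -- nonblocking ring exchange.
       Example build/run:  mpicc ring_demo.c -o ring && mpirun -np 4 ./ring */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process sends its rank to the right neighbor and receives
           from the left one; nonblocking calls let both operations be
           posted up front, so the exchange cannot deadlock. */
        int right = (rank + 1) % size;
        int left  = (rank - 1 + size) % size;
        int sendbuf = rank, recvbuf = -1;
        MPI_Request reqs[2];

        MPI_Irecv(&recvbuf, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendbuf, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        printf("rank %d received %d from rank %d\n", rank, recvbuf, left);
        MPI_Finalize();
        return 0;
    }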
The third block (Wed+Thu, Aug 24-25, 2022) is dedicated to intermediate and advanced methods in MPI, e.g., the group and communicator concept, advanced derived data types, and one-sided communication. This block also covers the latest features of MPI-3.0/3.1/4.0, e.g., the shared-memory programming model within MPI, the new Fortran language binding, nonblocking collectives, neighborhood communication, and large counts.
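As a flavor of one-sided communication, the following sketch (illustrative, not course material) lets each process write directly into a neighbor's memory window, with no matching receive on the target side:

    /* rma_demo.c -- one-sided put into the right neighbor's window. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Expose one integer per process for remote memory access. */
        int win_buf = -1;
        MPI_Win win;
        MPI_Win_create(&win_buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        /* Each process puts its rank into the window of its right
           neighbor; the fences open and close the access epoch. */
        int right = (rank + 1) % size;
        MPI_Win_fence(0, win);
        MPI_Put(&rank, 1, MPI_INT, right, 0, 1, MPI_INT, win);
        MPI_Win_fence(0, win);

        printf("rank %d: window now holds %d\n", rank, win_buf);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }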
You can choose any of the three 2-day blocks individually or combined.
Online course. Organizer: Scientific IT Services at ETH Zürich, Zürich, Switzerland.
Start: Wed, Aug 17, 2022, 08:45
End: Thu, Aug 25, 2022, 16:00
Location: Online, hosted by ETH
Language: English
Levels: Basic, Intermediate
Topics: Parallel Programming, MPI, OpenMP
Prerequisites: familiarity with Linux shell commands and some basic programming knowledge in C or Fortran (or Python for the MPI part).
To do the hands-on exercises of this course, you need a computer with an OpenMP-capable C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (for Fortran, the mpi_f08 module is required).
If you have access to a high-performance computing (HPC) cluster, you can also use it for the exercises (e.g., Euler for members of ETH Zurich; Euler provides the required software). Please note that the course organizers will not grant you access to an HPC system or any other compute environment, so make sure you have a functioning working environment or access to an HPC cluster before the course.
To check that your MPI and OpenMP installation works, download the test archive for the course and then follow the instructions in TEST/README.txt within the archive.
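Independently of that test archive, a quick sanity check is a hybrid MPI+OpenMP hello world; the following sketch (file name and wrapper commands are examples and may differ on your system) should compile and run if your installation is complete:

    /* hello_hybrid.c -- sanity check for an MPI + OpenMP toolchain.
       Typical build/run (wrapper names may differ on your system):
         mpicc -fopenmp hello_hybrid.c -o hello_hybrid
         mpirun -np 2 ./hello_hybrid                                 */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each MPI process spawns an OpenMP team; every thread reports in. */
        #pragma omp parallel
        printf("MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }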
In addition, most MPI exercises can also be done in Python with mpi4py + numpy. In this case, an appropriate Python installation is required on your system (together with a C/C++ or Fortran environment for the other exercises).
The optional exercise on race-condition detection (in the morning of the 2nd day) requires a race-condition detection tool, e.g., Intel Inspector together with the Intel compiler; installing such a tool beforehand is recommended.
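For reference, the kind of defect such tools detect looks like the following deliberately racy loop (a sketch, not the course exercise); the unsynchronized update of sum is what a tool like Intel Inspector would flag:

    /* race_demo.c -- contains a data race on 'sum'. */
    #include <stdio.h>

    int main(void) {
        int sum = 0;

        /* Data race: all threads update the shared 'sum' without
           synchronization, so the result is nondeterministic. */
        #pragma omp parallel for
        for (int i = 0; i < 100000; i++)
            sum += 1;             /* racy read-modify-write */

        /* A correct version would use: reduction(+:sum) */
        printf("sum = %d (expected 100000)\n", sum);
        return 0;
    }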
In case of questions, please contact the course organizer (see below).
Dr. Rolf Rabenseifner is a world-renowned expert in parallel computing who teaches courses on parallel programming with the Message Passing Interface (MPI), shared-memory parallelization with OpenMP, and the Partitioned Global Address Space (PGAS) languages UPC and Co-Array Fortran.
For details, see the six-day course agenda / content (preliminary).
The course hours are usually 9:00-16:00, with Zoom login starting at 8:45, and two exceptions:
On Day 4 (Tue, Aug 23), MPI beginners may choose an additional exercise and will be supported until 17:00 if they wish.
On Day 5 (Wed, Aug 24), there will be an additional slot from 16:00 to 16:30 for Fortran users of MPI.
Please refer to the course web page at ETH Zürich for registration, fee, and contact information.
http://www.hlrs.de/training/2022/ETH and https://sis.id.ethz.ch/services/consultingtraining/mpi_openmp_course.html