Important update: Getting familiar with OpenMP before the course is strongly recommended. You can attend the Introduction to OpenMP Offloading with AMD GPUs course on October 22.
In late 2024, HLRS will install its next supercomputer system, called “Hunter”. As with every new system, users have to invest some effort into porting their code and workflow to the new environment. Furthermore, the vast majority of Hunter’s compute power will be delivered by GPUs, which requires users to adapt their hot loops so they can be offloaded to the GPUs. In this workshop we will hence support our users in doing both.
As Hunter will deploy the Cray Programming Environment (CPE), which is also available (but not the default!) on Hawk, the first porting steps can already be done on Hawk. This extends the timeframe available for porting, and the work done in this workshop can be re-used immediately on Hunter. These first porting steps include building with the respective compilers, linking against the highly optimized numerical libraries provided by CPE, and using CPE’s performance analysis tools. Based on the latter, the time until the arrival of the new system can be used to identify and offload hot loops.
In order to offload to GPUs, multiple programming models (HIP, OpenMP device offloading, PSTL/do concurrent) are available, depending on the programming language used. In this workshop, we will also discuss the pros and cons of these models with respect to your situation and provide you with information on how to use them.
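For illustration, here is a minimal sketch of what OpenMP device offloading of a hot loop can look like in C; the function name and the saxpy-like kernel are hypothetical examples and not part of the official course material:

/* Hypothetical hot loop: y = a*x + y, offloaded via OpenMP device
   offloading. The map() clauses copy x and y to the GPU before the
   loop and copy y back afterwards. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma omp target teams distribute parallel for \
        map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

Built with an offloading-capable compiler (e.g. -fopenmp plus the vendor's GPU target options), the marked loop runs on the GPU; HIP and PSTL/do concurrent express the same idea with different syntax.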
Besides HLRS user support staff, HPE and AMD specialists will be available to help with issues related to tools provided by those companies.
Due to the large number of groups using HLRS’ systems and the limited support staff, the number of participants unfortunately needs to be limited. In order to use the system as efficiently as possible, we have to focus on groups holding medium and large compute time budgets. We therefore reserve the right to select attendees!
To allow for easy attendance, we decided to offer this workshop in a hybrid fashion. Besides meeting in person at HLRS, we will also set up breakout rooms in a Zoom session, which enable remote participants to communicate as well as to share screens and remote-control applications with support staff, hence providing the same options for interaction as meeting in person.
Target audience: Groups holding a compute time budget to be used on Hawk and Hunter.
This hybrid event will take place online and at HLRS, University of Stuttgart, Nobelstraße 19, 70569 Stuttgart, Germany.
November 11, 2024, 09:00 - November 15, 2024, 13:00
Hybrid Event - Stuttgart, Germany
English
Advanced
Performance Optimization & Debugging
Hardware Accelerators
Code Optimization
MPI
MPI+OpenMP
OpenMP
In order to attend the workshop, you should already have an account on Hawk and your application should already be running on the system. Furthermore, we require that you bring your own code including a test case which is set up according to the following rules:
use case selection:
When processing the test case, your code should exhibit behavior and a profile as close as possible to those of current and future production runs.
If possible, the test case should be representative of those production runs of your group that consume the largest part of your compute time budget.
number of cores:
In order to be representative, the test case should be comparable in size to the respective current and future production runs.
In order to save valuable resources and to allow for a productive workflow, it should, however, be as small as possible.
So please be prepared to reduce the size of your test case during the workshop! This can often be achieved by reducing the simulated domain or resolution while keeping the computational load per core constant ("weak down scaling").
wall time:
In order to allow for a productive workflow, the wall time should be only a few minutes.
At the same time, it should cover all important parts of the code, i.e. computation, communication and I/O.
So consider reducing the number of simulated time steps and increasing the I/O frequency, so that I/O can still be investigated within such a small number of time steps.
If you are unsure about how to set up your test case, please contact Björn Dick (contact data can be found below).
In general, the language of instruction is German, but it can be changed to English if required.
Learn more about course curricula and content levels.
HLRS, HPE and AMD user support staff
Handouts will be available to participants as PDFs.
This course will be hybrid, i.e. it will take place on-site at HLRS, but it will also be possible to attend online. Participants, online as well as on-site, have to be aware of and agree that they might appear in the live video stream captured by a camera in the back of the lecture room or by a webcam on laptops. The live stream will not be saved. We strongly recommend attending this course on-site, since on-site attendance is much more effective and efficient in our experience. We might therefore give priority to on-site over online participants during registration.
Registration closes on Sunday, November 3, 2024.
Late registrations after the registration phase are still possible, subject to course capacity.
Further fee categories can be found on the registration page.
Our course fees include coffee breaks (in classroom courses only).
Björn Dick, phone: 0711 685 87189, bjoern.dick(at)hlrs.de
Khatuna Kakhiani, phone: 0711 685 65796, training(at)hlrs.de
HLRS is part of the Gauss Centre for Supercomputing (GCS), together with JSC in Jülich and LRZ in Garching near Munich. EuroCC@GCS is the German National Competence Centre (NCC) for High-Performance Computing. HLRS is also a member of the Baden-Württemberg initiative bwHPC.
See the training overview and the Supercomputing Academy pages.