Optimized to provide large-scale computing power for complex simulations, Hawk was the flagship supercomputer of the High-Performance Computing Center Stuttgart (HLRS) from 2020 until early 2025. Its CPU-based architecture was complemented by a GPU partition suited to high-performance data analytics and artificial intelligence (AI) applications, as well as to hybrid workflows that combine high-performance computing and AI. With a theoretical peak performance of 26 Petaflops, Hawk debuted in 2020 at #16 on the TOP500 list.
Hawk was taken out of service in April 2025. Its AI expansion, including NVIDIA A100 GPUs, remains in operation.
Funding for the Hawk supercomputer was provided by the German Federal Ministry of Education and Research and the Baden-Württemberg Ministry for Science, Research and Arts through the Gauss Centre for Supercomputing.
| Hawk (CPU partition) | |
|---|---|
| Number of cabinets | 32 |
| Number of compute nodes | 4,096 |
| System peak performance | 26 Petaflops |
| CPUs per node | 2 |
| Cores per CPU | 64 |
| Number of compute cores | 524,288 |
| CPU frequency | 2.25 GHz |
| DIMMs in system | 65,536 |
| Total system memory | ~1 PB |
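The headline figures in this table are mutually consistent; for instance (the per-DIMM capacity below is inferred from the totals, not an official specification):

$$
4{,}096 \text{ nodes} \times 2 \text{ CPUs/node} \times 64 \text{ cores/CPU} = 524{,}288 \text{ cores}
$$

$$
\frac{65{,}536 \text{ DIMMs}}{4{,}096 \text{ nodes}} = 16 \text{ DIMMs/node}, \qquad
\frac{\sim\!1\,\text{PB}}{65{,}536 \text{ DIMMs}} \approx 16\,\text{GB per DIMM} \;(\approx 256\,\text{GB per node})
$$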
| AI expansion (GPU partition) | |
|---|---|
| Number of cabinets | 4 |
| Nodes per cabinet | 6 |
| Nodes in system | 24 |
| CPU type | AMD EPYC |
| GPUs per node | 8 |
| GPU type | NVIDIA A100 |
| GPUs in system | 192 |
| AI performance (per node) | ~5 PFlops |
| AI performance (system) | ~120 PFlops |
| Node-to-node interconnect | Dual-rail InfiniBand HDR200 |
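The GPU-partition figures likewise follow from one another:

$$
4 \text{ cabinets} \times 6 \text{ nodes/cabinet} = 24 \text{ nodes}, \qquad
24 \text{ nodes} \times 8 \text{ GPUs/node} = 192 \text{ GPUs}
$$

$$
24 \text{ nodes} \times \sim\!5\,\text{PFlops/node} \approx 120\,\text{PFlops of AI performance system-wide}
$$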
| Storage | |
|---|---|
| Disks in system | ~2,400 |
| Capacity per disk | 14 TB |
| Total disk storage capacity | ~42 PB |
| Frontend and service nodes | |
|---|---|
| Rack type | Adaptive Rack Cooling System (ARCS) |
| Number of racks | 5 + 2 ARCS cooling towers |
| Frontend nodes | 10 x HPE ProLiant DL385 Gen10 |
| Memory of frontend nodes | 5 x 1 TB, 4 x 2 TB, 1 x 4 TB |
| Data mover nodes | 4 x HPE ProLiant DL385 Gen10 |
| Operating system (service nodes) | Red Hat Enterprise Linux 8 |
| Interconnect | |
|---|---|
| Interconnect topology | Enhanced 9D hypercube |
| Interconnect bandwidth | 200 Gbit/s |
| Total InfiniBand cables | 3,024 |
| Total cable length | ~20 km |
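The topology entry refers to an enhanced 9-dimensional hypercube. As a purely illustrative sketch (not HLRS software, and omitting whatever "enhanced" extensions the production fabric adds), the following snippet shows how addressing works in a plain 9D binary hypercube: each vertex ID is a 9-bit number, neighbors differ in exactly one bit, and the minimal hop count between two vertices is the Hamming distance of their IDs.

```python
# Illustrative sketch of plain 9D binary hypercube addressing (not HLRS code).
# Each vertex has 9 direct links (one per dimension) and the network diameter
# is 9 hops; Hawk's "enhanced" hypercube adds further structure not modeled here.

DIMENSIONS = 9
NUM_VERTICES = 2 ** DIMENSIONS  # 512 vertices in a plain 9D hypercube


def neighbors(vertex: int) -> list[int]:
    """Return the IDs of all vertices reachable in one hop (flip each bit once)."""
    return [vertex ^ (1 << d) for d in range(DIMENSIONS)]


def hop_distance(a: int, b: int) -> int:
    """Minimal hop count between two vertices = Hamming distance of their IDs."""
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    v = 0b101010101
    print(f"vertex {v:09b} has {len(neighbors(v))} neighbors")
    print("hop distance 0 -> 511:", hop_distance(0, NUM_VERTICES - 1))  # 9 hops
```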
| Power | |
|---|---|
| Maximum power consumption per rack | ~90 kW |
| Power supplies in system | 2,112 |
| System power consumption, normal operation | ~3.5 MW |
| System power consumption, LINPACK operation | ~4.1 MW |
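A back-of-the-envelope reading of these figures (the split between compute racks and the rest of the installation is an inference, not an official breakdown):

$$
32 \text{ racks} \times \sim\!90\,\text{kW/rack} \approx 2.9\,\text{MW}
$$

so the compute racks alone can account for the bulk of the ~3.5 to ~4.1 MW system draw, with the remainder attributable to storage, frontend, network, and cooling infrastructure.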
| Cooling | |
|---|---|
| Cooling distribution units (CDUs) | 6 |
| Water inlet temperature (CDUs) | 25°C |
| Water return temperature (CDUs) | 35°C |
| Volume of cooling liquid in the system | ~2.5 m³ |
| Water inlet temperature (ARCS cooling towers) | 16°C |
| Water evaporation by wet cooling towers | ~9 m³/h |
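As a rough plausibility check, assuming essentially all rejected heat leaves through evaporation and taking the latent heat of vaporization of water as roughly 2.26 MJ/kg:

$$
\frac{\sim\!9\,\mathrm{m^3/h} \times 1000\,\mathrm{kg/m^3} \times 2.26\,\mathrm{MJ/kg}}{3600\,\mathrm{s/h}} \approx 5.7\,\mathrm{MW}
$$

which is of the same order as the ~3.5 to ~4.1 MW electrical load plus cooling-plant overhead.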