Combining HPC with Computers in Wind Farms

Photograph of three wind turbines over a flat landscape.
Photo: Sam Forson, Pexels

HLRS is coordinating a new research project called WindHPC, which will test strategies for making high-performance computing systems, software, and workflows more energy efficient.

The success of the wind energy sector will be crucial for achieving carbon neutrality in Germany. One challenge the field currently faces, however, is guaranteeing that the power generated at wind farms is used as efficiently as possible. In 2019, for example, wind turbines across the country are estimated to have generated approximately 5.4 terawatt-hours of surplus energy that was never used because of congestion on the power grid.[1] At the same time, growth in the IT sector is a major source of rising energy demand in Germany, with current usage at 16 terawatt-hours.[2] Although computing infrastructure is important for the country’s economic competitiveness, IT will need to use renewable energy more efficiently if Germany is to meet its energy needs while reaching its carbon emissions reduction goals.

The High-Performance Computing Center Stuttgart (HLRS) recently began coordinating a new research project that is investigating strategies to address these challenges in a coordinated way. Called WindHPC (In Windkraftanlagen integrierte Second-Life-Rechencluster), the project will for the first time combine computing infrastructure located at wind energy generation sites with a high-performance computing center. By distributing certain computing tasks to wind-powered infrastructure, the project aims both to use surplus energy produced at wind farms efficiently and to increase the amount of green energy used in computationally demanding research.

WindHPC will pursue a holistic strategy that considers not just how the hardware used for simulations affects energy efficiency, but also other critical elements of the problem-solving process that affect power consumption. This means looking closely at features such as how compute tasks are assigned within a distributed computing architecture, how the data resulting from a simulation are managed, and how simulation algorithms are chosen. At all of these levels, WindHPC will carefully monitor power consumption and performance metrics as a basis for cost-benefit analyses that improve sustainability in HPC.

As lead coordinator of WindHPC, HLRS is collaborating with a consortium of partners from academic research and industry. Among these is WestfalenWIND IT GmbH & Co KG | windCORES, a company that in 2019 received the German Computing Center Prize for its approach of powering computer racks onsite at wind energy generation facilities. Other partners include researchers from Helmut Schmidt University (HSU), the Technical University of Munich (TUM), the Technical University of Berlin (TUB), the RPTU Kaiserslautern-Landau (RPTU), and the Visualization Institute of the University of Stuttgart (VISUS).

WindHPC is funded by the German Federal Ministry of Education and Research (BMBF) as part of its GreenHPC initiative.

Making simulation more sustainable

The HPC industry has in recent years been driven by a demand for new, more powerful systems that deliver simulations at higher precision in less time. WindHPC is testing an alternative strategy, however, that incorporates smaller, decommissioned computing infrastructure in a so-called “second life cycle.” Using donated hardware, WindHPC will locate a second-life system at a wind park site, and a software scheduling system will distribute computing tasks to it at times when excess wind energy is available. Using high-speed data networks maintained by the German Research Network (DFN), these remote systems will be connected to HLRS’s Hawk supercomputer.
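As a rough illustration of this scheduling idea, the sketch below routes jobs to the wind-site system only while surplus power is available, falling back to the central system otherwise. The threshold, power readings, and job names are invustrative assumptions, not project code:

```python
# Illustrative sketch (not WindHPC code): route a job to the wind-site
# second-life cluster only when surplus wind power exceeds a threshold;
# otherwise send it to the central HPC system. All values are invented.

SURPLUS_THRESHOLD_KW = 50.0  # assumed minimum surplus worth dispatching for

def route_job(job: str, surplus_kw: float) -> str:
    """Return a dispatch decision for a job given current surplus power."""
    if surplus_kw >= SURPLUS_THRESHOLD_KW:
        return f"{job} -> wind-site cluster"
    return f"{job} -> central HPC system"

# Example: three jobs arriving under different surplus-power conditions.
for job, surplus in [("cfd_run_a", 120.0), ("cfd_run_b", 10.0), ("md_run_c", 75.0)]:
    print(route_job(job, surplus))
```

In practice a real scheduler would also weigh queue state, data-transfer cost, and forecasted wind availability; the threshold rule above only captures the basic dispatch decision.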

Distributing a simulation across a network in this way presents challenges for the movement and storage of data, and so computer scientists at HLRS will optimize the workflows necessary to do so. For the result data produced during simulations, they will also implement metadata frameworks that adhere to the FAIR principles (findability, accessibility, interoperability, and reusability). This will make it possible to integrate the data into repositories, such as the MolMod database of molecular models managed by RPTU Kaiserslautern-Landau. This approach could reduce the need for other researchers to run similar simulations in the future, further lowering the energy demand of scientific research.
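To make the FAIR idea concrete, a minimal metadata record for a simulation result might look like the sketch below. The field names, identifier, and URL are invented for illustration and do not reflect the MolMod database’s actual schema:

```python
# Illustrative sketch: a minimal FAIR-style metadata record attached to a
# simulation result before deposit in a repository. The schema and all
# values are hypothetical examples, not a real repository format.

import json

record = {
    "identifier": "sim-2023-0042",                       # findable: persistent ID (assumed)
    "access_url": "https://example.org/sim-2023-0042",   # accessible: retrieval location
    "format": "application/x-hdf5",                      # interoperable: standard file format
    "license": "CC-BY-4.0",                              # reusable: explicit usage terms
    "method": "molecular dynamics",                      # provenance: how the data were produced
}

print(json.dumps(record, indent=2))
```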


As simulations are run on this unusual computing architecture, WindHPC will capture and analyze energy and performance data at the node level. Codes running on the system will be monitored and their power consumption will be assessed based on key energy metrics. HLRS together with TUB and HSU will research and develop new methods for using this information in the auto-tuning of algorithms and intelligent scheduling of a simulation. The goal is to formulate and distribute parallel programming tasks automatically across nodes in ways that optimize the use of computing resources and reduce power consumption. At the cluster level, engineers will monitor efficiency in the combined usage of Hawk and the wind park-based system, studying issues such as system life cycle management and the effect of fluctuations in power production capacities resulting from, for example, changing wind conditions.
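Two metrics commonly used in such node-level assessments are energy-to-solution and the energy-delay product. The sketch below shows how they can be computed from periodic power samples; the sample values and sampling interval are invented for illustration, and the article does not specify which metrics WindHPC will adopt:

```python
# Illustrative sketch: two standard energy metrics computed from node-level
# power samples taken at a fixed interval. Sample values are invented.

def energy_to_solution(power_watts, interval_s):
    """Integrate power samples (rectangle rule) to get energy in joules."""
    return sum(power_watts) * interval_s

def energy_delay_product(power_watts, interval_s):
    """EDP = energy * runtime; penalizes runs that are slow AND power-hungry."""
    runtime_s = len(power_watts) * interval_s
    return energy_to_solution(power_watts, interval_s) * runtime_s

samples = [310.0, 325.0, 305.0, 298.0]  # watts for one node, sampled 1 s apart
print(energy_to_solution(samples, 1.0))    # 1238.0 joules
print(energy_delay_product(samples, 1.0))  # 4952.0 joule-seconds
```

Feeding metrics like these back into auto-tuning and scheduling is what allows a system to trade runtime against power draw quantitatively rather than by rule of thumb.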

As use cases, WindHPC will focus on HPC applications for process engineering in the chemical industry provided by scientists at the Technical University of Berlin and RPTU. These applications will be developed further into digital twins usable by other researchers. In addition, the WindHPC team will look closely at the power consumption required for the visualization of scientific results, the final step in the simulation-based problem-solving workflow. Here, MegaMol, software for the visualization of simulation results provided by VISUS, will be optimized with respect to the selection of visualization methods in order to reduce power needs.

Approximate computing could reduce energy demand

WindHPC will develop and test methods that could help scientists consider environmental impacts more systematically when choosing algorithms. By setting performance benchmarks and comparing the results of different simulation methods, the team aims to gain a better understanding of when to use an approach called “approximate computing.” Here, the goal is to balance the need for results at a precision sufficient for scientific and technical progress against the imperative to use as little energy as necessary.

Molecular dynamics simulation, for example, is a typical high-performance computing application that can provide accurate simulations of atomic-scale phenomena but is computationally very demanding. Classical CFD simulations, on the other hand, require fewer computing resources for a problem of the same size but offer much lower spatial resolution. WindHPC scientists will lead a cost-benefit analysis to evaluate the relationship between the knowledge produced by such different simulation methods and their energy requirements. The results could help to begin answering important questions facing the future of HPC: Is the knowledge acquired from certain kinds of simulations commensurate with the energy they consume? In what situations could smaller-scale, less precise, and less energy-intensive algorithms and simulations still deliver scientists the information they need to advance their research?
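The cost-benefit trade-off described above can be sketched as a simple selection rule: among the methods accurate enough for the task at hand, pick the one with the lowest energy cost. The method names, error estimates, and energy figures below are invented placeholders, not WindHPC measurements:

```python
# Illustrative sketch of approximate computing as a selection rule:
# choose the lowest-energy simulation method whose expected error still
# meets the accuracy target. All numbers here are invented examples.

METHODS = {
    # name: (expected relative error, energy per run in kWh) -- assumed values
    "molecular_dynamics": (0.01, 900.0),
    "high_res_cfd":       (0.05, 200.0),
    "coarse_cfd":         (0.15, 40.0),
}

def cheapest_sufficient(max_error: float) -> str:
    """Return the lowest-energy method whose error is within tolerance."""
    candidates = [(energy, name) for name, (err, energy) in METHODS.items()
                  if err <= max_error]
    if not candidates:
        raise ValueError("no method meets the accuracy target")
    return min(candidates)[1]

print(cheapest_sufficient(0.10))  # high_res_cfd: cheapest within 10% error
print(cheapest_sufficient(0.02))  # only molecular_dynamics is accurate enough
```

The hard part in practice, and a focus of the project’s benchmarking work, is obtaining trustworthy error and energy estimates to populate such a table in the first place.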

In this way, WindHPC will not only test the viability of computing at a green energy source, but also explore a variety of perspectives that together could make supercomputing more sustainable.

— Christopher Williams


References

1. Bundesnetzagentur, Quartalsbericht Netz- und Systemsicherheit – Gesamtes Jahr 2019, 2020.

2. https://www.bitkom.org/Presse/Presseinformation/Deutsche-Rechenzentren-Wachstumskurs